Add comments and LICENSE

LICENSE — new file (174 lines)
@@ -0,0 +1,174 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS
README.md — new file (9 lines)
@@ -0,0 +1,9 @@
# mote-tls

TLS 1.3 client with `no_std` and no allocator support.

Based on commit [426f327](https://github.com/drogue-iot/embedded-tls/commit/426f327) from [drogue-iot/embedded-tls](https://github.com/drogue-iot/embedded-tls).

## License

Apache-2.0
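The `no_std`, no-allocator design described in the README means the caller owns every buffer the connection uses. A minimal sketch of that caller-owned-buffer pattern — the `Conn` type and its methods here are illustrative stand-ins, not mote-tls's actual API:

```rust
// Sketch of the buffer-ownership pattern a no_std, no-alloc TLS client implies:
// the connection borrows fixed buffers supplied by the caller instead of
// allocating, so it can live in a bare-metal firmware image.
// Names are hypothetical, not this crate's API.
struct Conn<'a> {
    read_buf: &'a mut [u8],
    write_buf: &'a mut [u8],
}

impl<'a> Conn<'a> {
    fn new(read_buf: &'a mut [u8], write_buf: &'a mut [u8]) -> Self {
        Conn { read_buf, write_buf }
    }

    fn capacities(&self) -> (usize, usize) {
        (self.read_buf.len(), self.write_buf.len())
    }
}

fn main() {
    // Buffers live in the caller's (possibly static) memory, not on a heap.
    let mut rx = [0u8; 16640];
    let mut tx = [0u8; 4096];
    let conn = Conn::new(&mut rx, &mut tx);
    assert_eq!(conn.capacities(), (16640, 4096));
}
```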
@@ -17,9 +17,9 @@ use embedded_io_async::{BufRead, Read as AsyncRead, Write as AsyncWrite};
 pub use crate::config::*;

-/// Type representing an async TLS connection. An instance of this type can
-/// be used to establish a TLS connection, write and read encrypted data over this connection,
-/// and closing to free up the underlying resources.
+/// An async TLS 1.3 client stream wrapping an underlying async transport.
+///
+/// Call [`open`](SecureStream::open) to perform the handshake before reading or writing.
 pub struct SecureStream<'a, Socket, CipherSuite>
 where
     Socket: AsyncRead + AsyncWrite + 'a,
@@ -42,17 +42,6 @@ where
     pub fn is_opened(&mut self) -> bool {
         *self.opened.get_mut()
     }
-    /// Create a new TLS connection with the provided context and a async I/O implementation
-    ///
-    /// NOTE: The record read buffer should be sized to fit an encrypted TLS record. The size of this record
-    /// depends on the server configuration, but the maximum allowed value for a TLS record is 16640 bytes,
-    /// which should be a safe value to use.
-    ///
-    /// The write record buffer can be smaller than the read buffer. During writes [`TLS_RECORD_OVERHEAD`] bytes of
-    /// overhead is added per record, so the buffer must at least be this large. Large writes are split into multiple
-    /// records if depending on the size of the write buffer.
-    /// The largest of the two buffers will be used to encode the TLS handshake record, hence either of the
-    /// buffers must at least be large enough to encode a handshake.
     pub fn new(
         delegate: Socket,
         record_read_buf: &'a mut [u8],
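The sizing rule in the deleted doc comment can be checked directly: RFC 8446 §5.2 caps a TLSCiphertext fragment at 2^14 + 256 bytes, which is the 16640-byte figure quoted above. A standalone sketch of that arithmetic (not mote-tls code):

```rust
// Maximum TLSCiphertext fragment per RFC 8446 §5.2:
// 2^14 bytes of plaintext plus up to 256 bytes of expansion
// (inner content-type byte, padding, and AEAD tag).
const MAX_PLAINTEXT: usize = 1 << 14; // 16384
const MAX_EXPANSION: usize = 256;
const MAX_CIPHERTEXT_FRAGMENT: usize = MAX_PLAINTEXT + MAX_EXPANSION;

fn main() {
    // The 16640-byte figure quoted in the removed doc comment:
    assert_eq!(MAX_CIPHERTEXT_FRAGMENT, 16640);
    // A read buffer of this size fits any conforming record fragment.
    let read_buf = [0u8; MAX_CIPHERTEXT_FRAGMENT];
    assert!(read_buf.len() >= MAX_CIPHERTEXT_FRAGMENT);
}
```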
@@ -69,29 +58,16 @@ where
         }
     }

-    /// Returns a reference to the current flush policy.
-    ///
-    /// The flush policy controls whether the underlying transport is flushed
-    /// (via its `flush()` method) after writing a TLS record.
     #[inline]
     pub fn flush_policy(&self) -> FlushPolicy {
         self.flush_policy
     }

-    /// Replace the current flush policy with the provided one.
-    ///
-    /// This sets how and when the connection will call `flush()` on the
-    /// underlying transport after writing records.
     #[inline]
     pub fn set_flush_policy(&mut self, policy: FlushPolicy) {
         self.flush_policy = policy;
     }

-    /// Open a TLS connection, performing the handshake with the configuration provided when
-    /// creating the connection instance.
-    ///
-    /// Returns an error if the handshake does not proceed. If an error occurs, the connection
-    /// instance must be recreated.
     pub async fn open<CP>(
         &mut self,
         mut context: ConnectContext<'_, CP>,
@@ -128,16 +104,9 @@ where
         Ok(())
     }

-    /// Encrypt and send the provided slice over the connection. The connection
-    /// must be opened before writing.
-    ///
-    /// The slice may be buffered internally and not written to the connection immediately.
-    /// In this case [`Self::flush()`] should be called to force the currently buffered writes
-    /// to be written to the connection.
-    ///
-    /// Returns the number of bytes buffered/written.
     pub async fn write(&mut self, buf: &[u8]) -> Result<usize, ProtocolError> {
         if self.is_opened() {
+            // Start a new ApplicationData record if none is in progress
             if !self
                 .record_write_buf
                 .contains(ClientRecordHeader::ApplicationData)
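The removed doc comment described the write-then-flush contract: `write` may only buffer bytes in the in-memory record buffer, and `flush` is what pushes them to the transport. A minimal model of that contract — the `RecordBuffer` type here is a hypothetical stand-in, not the crate's record buffer:

```rust
// Sketch of the buffered-write semantics from the removed doc comment:
// writes accumulate in a record buffer and reach the transport only on flush().
// Names are illustrative; the real stream encrypts records before sending.
struct RecordBuffer {
    buf: Vec<u8>,  // pending, not-yet-encoded plaintext
    sent: Vec<u8>, // stands in for the underlying transport
}

impl RecordBuffer {
    // Returns the number of bytes buffered, mirroring write()'s return value.
    fn write(&mut self, data: &[u8]) -> usize {
        self.buf.extend_from_slice(data);
        data.len()
    }

    // Encode and "transmit" everything buffered so far.
    fn flush(&mut self) {
        self.sent.append(&mut self.buf);
    }
}

fn main() {
    let mut rb = RecordBuffer { buf: Vec::new(), sent: Vec::new() };
    assert_eq!(rb.write(b"hello"), 5);
    assert!(rb.sent.is_empty()); // nothing on the wire until flush
    rb.flush();
    assert_eq!(rb.sent, b"hello");
}
```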
@@ -159,8 +128,6 @@ where
         }
     }

-    /// Force all previously written, buffered bytes to be encoded into a tls record and written
-    /// to the connection.
     pub async fn flush(&mut self) -> Result<(), ProtocolError> {
         if !self.record_write_buf.is_empty() {
             let key_schedule = self.key_schedule.write_state();
@@ -193,7 +160,6 @@ where
         self.decrypted.create_read_buffer(self.record_reader.buf)
     }

-    /// Read and decrypt data filling the provided slice.
     pub async fn read(&mut self, buf: &mut [u8]) -> Result<usize, ProtocolError> {
         if buf.is_empty() {
             return Ok(0);
@@ -206,7 +172,6 @@ where
         Ok(len)
     }

-    /// Reads buffered data. If nothing is in memory, it'll wait for a TLS record and process it.
     pub async fn read_buffered(&mut self) -> Result<ReadBuffer<'_>, ProtocolError> {
         if self.is_opened() {
             while self.decrypted.is_empty() {
@@ -240,12 +205,12 @@ where
         Ok(())
     }

-    /// Close a connection instance, returning the ownership of the config, random generator and the async I/O provider.
     async fn close_internal(&mut self) -> Result<(), ProtocolError> {
         self.flush().await?;

         let is_opened = self.is_opened();
         let (write_key_schedule, read_key_schedule) = self.key_schedule.as_split();
+        // Send a close_notify alert to signal clean shutdown (RFC 8446 §6.1)
         let slice = self.record_write_buf.write_record(
             &ClientRecord::close_notify(is_opened),
             write_key_schedule,
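For context on the new `close_notify` comment: before encryption, a TLS alert is just a two-byte payload (RFC 8446 §6) — a level byte and a description byte. `close_notify` has description 0 and is conventionally sent with level warning (1); TLS 1.3 ignores the level field on receipt. A standalone sketch of those bytes (not the crate's record-encoding code):

```rust
// TLS alert constants per RFC 8446 §6.
const ALERT_LEVEL_WARNING: u8 = 1;
const ALERT_CLOSE_NOTIFY: u8 = 0;

// The two-byte Alert payload carried inside the (encrypted) record.
fn close_notify_payload() -> [u8; 2] {
    [ALERT_LEVEL_WARNING, ALERT_CLOSE_NOTIFY]
}

fn main() {
    assert_eq!(close_notify_payload(), [1, 0]);
}
```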
@@ -262,7 +227,6 @@ where
         self.flush_transport().await
     }

-    /// Close a connection instance, returning the ownership of the async I/O provider.
     pub async fn close(mut self) -> Result<Socket, (Socket, ProtocolError)> {
         match self.close_internal().await {
             Ok(()) => Ok(self.delegate),
@@ -279,6 +243,7 @@ where
     where
         Socket: Clone,
     {
+        // Split requires a Clone socket so both halves can independently drive the same connection
        let (wks, rks) = self.key_schedule.as_split();

        let reader = TlsReader {
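The new comment on the split path explains the `Socket: Clone` bound: each half keeps its own handle to the same transport. A small sketch of one common way to satisfy that bound for a transport that is not itself cloneable — wrapping it in a shared handle; the `SharedSocket` type is hypothetical, not part of this crate:

```rust
use std::sync::{Arc, Mutex};

// Each split half clones this handle; both observe the same underlying state.
// An Arc<Mutex<_>> wrapper is one cheap way to make a non-Clone transport Clone.
#[derive(Clone)]
struct SharedSocket(Arc<Mutex<Vec<u8>>>);

fn main() {
    let sock = SharedSocket(Arc::new(Mutex::new(Vec::new())));
    let writer_half = sock.clone();
    let reader_half = sock;

    // The "writer" pushes bytes through its handle...
    writer_half.0.lock().unwrap().extend_from_slice(b"rec");
    // ...and the "reader" sees them through its own handle.
    assert_eq!(reader_half.0.lock().unwrap().as_slice(), b"rec".as_slice());
}
```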
@@ -375,7 +340,6 @@ where
         self.decrypted.create_read_buffer(self.record_reader.buf)
     }

-    /// Reads buffered data. If nothing is in memory, it'll wait for a TLS record and process it.
     pub async fn read_buffered(&mut self) -> Result<ReadBuffer<'_>, ProtocolError> {
         if self.opened.load(Ordering::Acquire) {
             while self.decrypted.is_empty() {
@@ -17,9 +17,9 @@ use portable_atomic::AtomicBool;
 pub use crate::ProtocolError;
 pub use crate::config::*;

-/// Type representing a TLS connection. An instance of this type can
-/// be used to establish a TLS connection, write and read encrypted data over this connection,
-/// and closing to free up the underlying resources.
+/// Blocking TLS 1.3 client stream wrapping a synchronous transport.
+///
+/// Call [`open`](SecureStream::open) to perform the handshake before reading or writing.
 pub struct SecureStream<'a, Socket, CipherSuite>
 where
     Socket: Read + Write + 'a,
@@ -43,17 +43,6 @@ where
         *self.opened.get_mut()
     }

-    /// Create a new TLS connection with the provided context and a blocking I/O implementation
-    ///
-    /// NOTE: The record read buffer should be sized to fit an encrypted TLS record. The size of this record
-    /// depends on the server configuration, but the maximum allowed value for a TLS record is 16640 bytes,
-    /// which should be a safe value to use.
-    ///
-    /// The write record buffer can be smaller than the read buffer. During writes [`TLS_RECORD_OVERHEAD`] bytes of
-    /// overhead is added per record, so the buffer must at least be this large. Large writes are split into multiple
-    /// records if depending on the size of the write buffer.
-    /// The largest of the two buffers will be used to encode the TLS handshake record, hence either of the
-    /// buffers must at least be large enough to encode a handshake.
     pub fn new(
         delegate: Socket,
         record_read_buf: &'a mut [u8],
@@ -70,29 +59,16 @@ where
         }
     }

-    /// Returns a reference to the current flush policy.
-    ///
-    /// The flush policy controls whether the underlying transport is flushed
-    /// (via its `flush()` method) after writing a TLS record.
     #[inline]
     pub fn flush_policy(&self) -> FlushPolicy {
         self.flush_policy
     }

-    /// Replace the current flush policy with the provided one.
-    ///
-    /// This sets how and when the connection will call `flush()` on the
-    /// underlying transport after writing records.
     #[inline]
     pub fn set_flush_policy(&mut self, policy: FlushPolicy) {
         self.flush_policy = policy;
     }

-    /// Open a TLS connection, performing the handshake with the configuration provided when
-    /// creating the connection instance.
-    ///
-    /// Returns an error if the handshake does not proceed. If an error occurs, the connection
-    /// instance must be recreated.
     pub fn open<CP>(&mut self, mut context: ConnectContext<CP>) -> Result<(), ProtocolError>
     where
         CP: CryptoBackend<CipherSuite = CipherSuite>,
@@ -124,16 +100,9 @@ where
         Ok(())
     }

-    /// Encrypt and send the provided slice over the connection. The connection
-    /// must be opened before writing.
-    ///
-    /// The slice may be buffered internally and not written to the connection immediately.
-    /// In this case [`Self::flush()`] should be called to force the currently buffered writes
-    /// to be written to the connection.
-    ///
-    /// Returns the number of bytes buffered/written.
     pub fn write(&mut self, buf: &[u8]) -> Result<usize, ProtocolError> {
         if self.is_opened() {
+            // Start a new ApplicationData record if none is in progress
             if !self
                 .record_write_buf
                 .contains(ClientRecordHeader::ApplicationData)
@@ -155,8 +124,6 @@ where
         }
     }

-    /// Force all previously written, buffered bytes to be encoded into a tls record and written
-    /// to the connection.
     pub fn flush(&mut self) -> Result<(), ProtocolError> {
         if !self.record_write_buf.is_empty() {
             let key_schedule = self.key_schedule.write_state();
@@ -185,7 +152,6 @@ where
         self.decrypted.create_read_buffer(self.record_reader.buf)
     }

-    /// Read and decrypt data filling the provided slice.
     pub fn read(&mut self, buf: &mut [u8]) -> Result<usize, ProtocolError> {
         if buf.is_empty() {
             return Ok(0);
@@ -198,7 +164,6 @@ where
         Ok(len)
     }

-    /// Reads buffered data. If nothing is in memory, it'll wait for a TLS record and process it.
     pub fn read_buffered(&mut self) -> Result<ReadBuffer<'_>, ProtocolError> {
         if self.is_opened() {
             while self.decrypted.is_empty() {
@@ -235,6 +200,7 @@ where

         let is_opened = self.is_opened();
         let (write_key_schedule, read_key_schedule) = self.key_schedule.as_split();
+        // Send a close_notify alert to signal clean shutdown (RFC 8446 §6.1)
         let slice = self.record_write_buf.write_record(
             &ClientRecord::close_notify(is_opened),
             write_key_schedule,
@@ -252,7 +218,6 @@ where
         Ok(())
     }

-    /// Close a connection instance, returning the ownership of the I/O provider.
     pub fn close(mut self) -> Result<Socket, (Socket, ProtocolError)> {
         match self.close_internal() {
             Ok(()) => Ok(self.delegate),
@@ -365,7 +330,6 @@ where
         self.decrypted.create_read_buffer(self.record_reader.buf)
     }

-    /// Reads buffered data. If nothing is in memory, it'll wait for a TLS record and process it.
     pub fn read_buffered(&mut self) -> Result<ReadBuffer<'_>, ProtocolError> {
         if self.opened.load(Ordering::Acquire) {
             while self.decrypted.is_empty() {
@@ -157,10 +157,6 @@ impl<'b> CryptoBuffer<'b> {

    pub(crate) fn offset(self, offset: usize) -> CryptoBuffer<'b> {
        let new_len = self.len + self.offset - offset;
-        /*info!(
-            "offset({}) len({}) -> offset({}), len({})",
-            self.offset, self.len, offset, new_len
-        );*/
        CryptoBuffer {
            buf: self.buf,
            len: new_len,
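The `new_len = len + old_offset - new_offset` arithmetic in the hunk above keeps the window's end fixed while moving its start: since `end = offset + len` must stay constant, shrinking the offset grows the length by the same amount. A minimal model of just that arithmetic — `Window` and `reoffset` are hypothetical names mirroring only what the hunk shows:

```rust
// Model of CryptoBuffer::offset's length adjustment: the window end
// (offset + len) is invariant, so new_len = len + old_offset - new_offset.
#[derive(Debug, PartialEq)]
struct Window {
    offset: usize,
    len: usize,
}

fn reoffset(w: Window, offset: usize) -> Window {
    let new_len = w.len + w.offset - offset;
    Window { offset, len: new_len }
}

fn main() {
    // Moving the start back from 10 to 4 grows the window by 6 bytes;
    // the end (offset + len) is 30 in both cases.
    let w = reoffset(Window { offset: 10, len: 20 }, 4);
    assert_eq!(w, Window { offset: 4, len: 26 });
    assert_eq!(w.offset + w.len, 30);
}
```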
@@ -14,46 +14,38 @@ use heapless::Vec;
 impl TryInto<&'static webpki::SignatureAlgorithm> for SignatureScheme {
     type Error = ProtocolError;
     fn try_into(self) -> Result<&'static webpki::SignatureAlgorithm, Self::Error> {
-        // TODO: support other schemes via 'alloc' feature
-        #[allow(clippy::match_same_arms)] // Style
+        #[allow(clippy::match_same_arms)]
         match self {
             SignatureScheme::RsaPkcs1Sha256
             | SignatureScheme::RsaPkcs1Sha384
             | SignatureScheme::RsaPkcs1Sha512 => Err(ProtocolError::InvalidSignatureScheme),

-            /* ECDSA algorithms */
             SignatureScheme::EcdsaSecp256r1Sha256 => Ok(&webpki::ECDSA_P256_SHA256),
             SignatureScheme::EcdsaSecp384r1Sha384 => Ok(&webpki::ECDSA_P384_SHA384),
             SignatureScheme::EcdsaSecp521r1Sha512 => Err(ProtocolError::InvalidSignatureScheme),

-            /* RSASSA-PSS algorithms with public key OID rsaEncryption */
             SignatureScheme::RsaPssRsaeSha256
             | SignatureScheme::RsaPssRsaeSha384
             | SignatureScheme::RsaPssRsaeSha512 => Err(ProtocolError::InvalidSignatureScheme),

-            /* EdDSA algorithms */
             SignatureScheme::Ed25519 => Ok(&webpki::ED25519),
             SignatureScheme::Ed448
             | SignatureScheme::Sha224Ecdsa
             | SignatureScheme::Sha224Rsa
             | SignatureScheme::Sha224Dsa => Err(ProtocolError::InvalidSignatureScheme),

-            /* RSASSA-PSS algorithms with public key OID RSASSA-PSS */
             SignatureScheme::RsaPssPssSha256
             | SignatureScheme::RsaPssPssSha384
             | SignatureScheme::RsaPssPssSha512 => Err(ProtocolError::InvalidSignatureScheme),

-            /* Legacy algorithms */
             SignatureScheme::RsaPkcs1Sha1 | SignatureScheme::EcdsaSha1 => {
                 Err(ProtocolError::InvalidSignatureScheme)
             }

-            /* Ml-DSA */
             SignatureScheme::MlDsa44 | SignatureScheme::MlDsa65 | SignatureScheme::MlDsa87 => {
                 Err(ProtocolError::InvalidSignatureScheme)
             }

-            /* Brainpool */
             SignatureScheme::Sha256BrainpoolP256r1
             | SignatureScheme::Sha384BrainpoolP384r1
             | SignatureScheme::Sha512BrainpoolP512r1 => Err(ProtocolError::InvalidSignatureScheme),
@@ -70,17 +62,14 @@ impl TryInto<&'static webpki::SignatureAlgorithm> for SignatureScheme {
|
|||||||
SignatureScheme::RsaPkcs1Sha384 => Ok(&webpki::RSA_PKCS1_2048_8192_SHA384),
|
SignatureScheme::RsaPkcs1Sha384 => Ok(&webpki::RSA_PKCS1_2048_8192_SHA384),
|
||||||
SignatureScheme::RsaPkcs1Sha512 => Ok(&webpki::RSA_PKCS1_2048_8192_SHA512),
|
SignatureScheme::RsaPkcs1Sha512 => Ok(&webpki::RSA_PKCS1_2048_8192_SHA512),
|
||||||
|
|
||||||
/* ECDSA algorithms */
|
|
||||||
SignatureScheme::EcdsaSecp256r1Sha256 => Ok(&webpki::ECDSA_P256_SHA256),
|
SignatureScheme::EcdsaSecp256r1Sha256 => Ok(&webpki::ECDSA_P256_SHA256),
|
||||||
SignatureScheme::EcdsaSecp384r1Sha384 => Ok(&webpki::ECDSA_P384_SHA384),
|
SignatureScheme::EcdsaSecp384r1Sha384 => Ok(&webpki::ECDSA_P384_SHA384),
|
||||||
SignatureScheme::EcdsaSecp521r1Sha512 => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::EcdsaSecp521r1Sha512 => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
|
|
||||||
/* RSASSA-PSS algorithms with public key OID rsaEncryption */
|
|
||||||
SignatureScheme::RsaPssRsaeSha256 => Ok(&webpki::RSA_PSS_2048_8192_SHA256_LEGACY_KEY),
|
SignatureScheme::RsaPssRsaeSha256 => Ok(&webpki::RSA_PSS_2048_8192_SHA256_LEGACY_KEY),
|
||||||
SignatureScheme::RsaPssRsaeSha384 => Ok(&webpki::RSA_PSS_2048_8192_SHA384_LEGACY_KEY),
|
SignatureScheme::RsaPssRsaeSha384 => Ok(&webpki::RSA_PSS_2048_8192_SHA384_LEGACY_KEY),
|
||||||
SignatureScheme::RsaPssRsaeSha512 => Ok(&webpki::RSA_PSS_2048_8192_SHA512_LEGACY_KEY),
|
SignatureScheme::RsaPssRsaeSha512 => Ok(&webpki::RSA_PSS_2048_8192_SHA512_LEGACY_KEY),
|
||||||
|
|
||||||
/* EdDSA algorithms */
|
|
||||||
SignatureScheme::Ed25519 => Ok(&webpki::ED25519),
|
SignatureScheme::Ed25519 => Ok(&webpki::ED25519),
|
||||||
SignatureScheme::Ed448 => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::Ed448 => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
|
|
||||||
@@ -88,21 +77,17 @@ impl TryInto<&'static webpki::SignatureAlgorithm> for SignatureScheme {
|
|||||||
SignatureScheme::Sha224Rsa => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::Sha224Rsa => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
SignatureScheme::Sha224Dsa => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::Sha224Dsa => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
|
|
||||||
/* RSASSA-PSS algorithms with public key OID RSASSA-PSS */
|
|
||||||
SignatureScheme::RsaPssPssSha256 => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::RsaPssPssSha256 => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
SignatureScheme::RsaPssPssSha384 => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::RsaPssPssSha384 => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
SignatureScheme::RsaPssPssSha512 => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::RsaPssPssSha512 => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
|
|
||||||
/* Legacy algorithms */
|
|
||||||
SignatureScheme::RsaPkcs1Sha1 => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::RsaPkcs1Sha1 => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
SignatureScheme::EcdsaSha1 => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::EcdsaSha1 => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
|
|
||||||
/* MlDsa */
|
|
||||||
SignatureScheme::MlDsa44 => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::MlDsa44 => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
SignatureScheme::MlDsa65 => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::MlDsa65 => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
SignatureScheme::MlDsa87 => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::MlDsa87 => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
|
|
||||||
/* Brainpool */
|
|
||||||
SignatureScheme::Sha256BrainpoolP256r1 => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::Sha256BrainpoolP256r1 => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
SignatureScheme::Sha384BrainpoolP384r1 => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::Sha384BrainpoolP384r1 => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
SignatureScheme::Sha512BrainpoolP512r1 => Err(ProtocolError::InvalidSignatureScheme),
|
SignatureScheme::Sha512BrainpoolP512r1 => Err(ProtocolError::InvalidSignatureScheme),
|
||||||
@@ -194,7 +179,6 @@ fn verify_signature(
 ) -> Result<(), ProtocolError> {
     let mut verified = false;
     if !certificate.entries.is_empty() {
-        // TODO: Support intermediates...
         if let CertificateEntryRef::X509(certificate) = certificate.entries[0] {
             let cert = webpki::EndEntityCert::try_from(certificate).map_err(|e| {
                 warn!("ProtocolError loading cert: {:?}", e);
@@ -240,7 +224,6 @@ fn verify_certificate(
     trace!("We got {} certificate entries", certificate.entries.len());

     if !certificate.entries.is_empty() {
-        // TODO: Support intermediates...
         if let CertificateEntryRef::X509(certificate) = certificate.entries[0] {
             let cert = webpki::EndEntityCert::try_from(certificate).map_err(|e| {
                 warn!("ProtocolError loading cert: {:?}", e);
@@ -250,7 +233,6 @@ fn verify_certificate(
             let time = if let Some(now) = now {
                 webpki::Time::from_seconds_since_unix_epoch(now)
             } else {
-                // If no clock is provided, the validity check will fail
                 webpki::Time::from_seconds_since_unix_epoch(0)
             };
             info!("Certificate is loaded!");
@@ -52,11 +52,8 @@ pub const RSA_PKCS1_SHA512: AlgorithmIdentifier = AlgorithmIdentifier {
 #[asn1(type = "INTEGER")]
 #[repr(u8)]
 pub enum Version {
-    /// Version 1 (default)
     V1 = 0,
-    /// Version 2
     V2 = 1,
-    /// Version 3
     V3 = 2,
 }

@@ -6,15 +6,13 @@ use crate::parse_buffer::ParseBuffer;
 #[cfg_attr(feature = "defmt", derive(defmt::Format))]
 pub struct ChangeCipherSpec {}

-#[allow(clippy::unnecessary_wraps)] // TODO
+#[allow(clippy::unnecessary_wraps)]
 impl ChangeCipherSpec {
     pub fn new() -> Self {
         Self {}
     }

     pub fn read(_rx_buf: &mut [u8]) -> Result<Self, ProtocolError> {
-        // info!("change cipher spec of len={}", rx_buf.len());
-        // TODO: Decode data
         Ok(Self {})
     }

@@ -4,7 +4,7 @@ use p256::ecdh::SharedSecret;

 pub struct CryptoEngine {}

-#[allow(clippy::unused_self, clippy::needless_pass_by_value)] // TODO
+#[allow(clippy::unused_self, clippy::needless_pass_by_value)]
 impl CryptoEngine {
     pub fn new(_group: NamedGroup, _shared: SharedSecret) -> Self {
         Self {}
@@ -27,12 +27,7 @@ impl DecryptedReadHandler<'_> {
         );

         let offset = unsafe {
-            // SAFETY: The assertion above ensures `slice` is a subslice of the read buffer.
-            // This, in turn, ensures we don't violate safety constraints of `offset_from`.
-
-            // TODO: We are only assuming here that the pointers are derived from the read
-            //       buffer. While this is reasonable, and we don't do any pointer magic,
-            //       it's not an invariant.
             slice_ptrs.start.offset_from(self.source_buffer.start) as usize
         };

@@ -51,9 +46,6 @@ impl DecryptedReadHandler<'_> {
             }
             ServerRecord::ChangeCipherSpec(_) => Err(ProtocolError::InternalError),
             ServerRecord::Handshake(ServerHandshake::NewSessionTicket(_)) => {
-                // TODO: we should validate extensions and abort. We can do this automatically
-                // as long as the connection is unsplit, however, split connections must be aborted
-                // by the user.
                 Ok(())
             }
             ServerRecord::Handshake(_) => {
@@ -19,9 +19,9 @@ use typenum::{Sum, U10, U12, U16, U32};

 pub use crate::extensions::extension_data::max_fragment_length::MaxFragmentLength;

+/// Extra bytes required per record for the TLS 1.3 header, authentication tag, and inner content type.
 pub const TLS_RECORD_OVERHEAD: usize = 128;

-// longest label is 12b -> buf <= 2 + 1 + 6 + longest + 1 + hash_out = hash_out + 22
 type LongestLabel = U12;
 type LabelOverhead = U10;
 type LabelBuffer<CipherSuite> = Sum<
@@ -29,7 +29,7 @@ type LabelBuffer<CipherSuite> = Sum<
     Sum<LongestLabel, LabelOverhead>,
 >;

-/// Represents a TLS 1.3 cipher suite
+/// Associates a cipher, key/IV lengths, hash algorithm, and label buffer size for a TLS 1.3 cipher suite.
 pub trait TlsCipherSuite {
     const CODE_POINT: u16;
     type Cipher: KeyInit<KeySize = Self::KeyLen> + AeadInPlace<NonceSize = Self::IvLen>;
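The removed `hash_out + 22` comment above encoded the worst-case size of a TLS 1.3 `HkdfLabel`, which the type-level `Sum<HashOutLen, Sum<LongestLabel, LabelOverhead>>` still expresses. A minimal standalone sketch of that arithmetic (the function name is ours, not the crate's API):

```rust
// Sketch of the HkdfLabel buffer sizing the LabelBuffer type alias encodes:
//   length(2) + label_len(1) + "tls13 "(6) + label(<= 12 bytes) + context_len(1) + context(hash_out)
fn hkdf_label_buffer_size(hash_out: usize) -> usize {
    const LONGEST_LABEL: usize = 12; // e.g. "c hs traffic" is 12 bytes
    2 + 1 + 6 + LONGEST_LABEL + 1 + hash_out
}

fn main() {
    // SHA-256 suites carry a 32-byte transcript context: 32 + 22 = 54
    assert_eq!(hkdf_label_buffer_size(32), 54);
    // SHA-384 suites: 48 + 22 = 70
    assert_eq!(hkdf_label_buffer_size(48), 70);
}
```

Encoding the same sum with `typenum` lets the buffer be sized at compile time on `no_std` targets, which is why the crate uses type-level arithmetic instead of a runtime function.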
@@ -62,35 +62,23 @@ impl TlsCipherSuite for Aes256GcmSha384 {
     type LabelBufferSize = LabelBuffer<Self>;
 }

-/// A TLS 1.3 verifier.
-///
-/// The verifier is responsible for verifying certificates and signatures. Since certificate verification is
-/// an expensive process, this trait allows clients to choose how much verification should take place,
-/// and also to skip the verification if the server is verified through other means (I.e. a pre-shared key).
+/// Certificate and server-identity verification interface. Implement to enforce PKI validation.
 pub trait Verifier<CipherSuite>
 where
     CipherSuite: TlsCipherSuite,
 {
-    /// Host verification is enabled by passing a server hostname.
     fn set_hostname_verification(&mut self, hostname: &str) -> Result<(), crate::ProtocolError>;

-    /// Verify a certificate.
-    ///
-    /// The handshake transcript up to this point and the server certificate is provided
-    /// for the implementation to use. The verifier is responsible for resolving the CA
-    /// certificate internally.
     fn verify_certificate(
         &mut self,
         transcript: &CipherSuite::Hash,
         cert: CertificateRef,
     ) -> Result<(), ProtocolError>;

-    /// Verify the certificate signature.
-    ///
-    /// The signature verification uses the transcript and certificate provided earlier to decode the provided signature.
     fn verify_signature(&mut self, verify: HandshakeVerifyRef) -> Result<(), crate::ProtocolError>;
 }

+/// A [`Verifier`] that accepts any certificate without validation. Useful for testing only.
 pub struct NoVerify;

 impl<CipherSuite> Verifier<CipherSuite> for NoVerify
@@ -114,12 +102,14 @@ where
     }
 }

+/// Configuration for a single TLS client connection: server name, PSK, cipher preferences, etc.
 #[derive(Debug, Clone)]
 #[cfg_attr(feature = "defmt", derive(defmt::Format))]
 #[must_use = "ConnectConfig does nothing unless consumed"]
 pub struct ConnectConfig<'a> {
     pub(crate) server_name: Option<&'a str>,
     pub(crate) alpn_protocols: Option<&'a [&'a [u8]]>,
+    // PSK value and the list of identity labels to offer in the ClientHello
     pub(crate) psk: Option<(&'a [u8], Vec<&'a [u8], 4>)>,
     pub(crate) signature_schemes: Vec<SignatureScheme, 25>,
     pub(crate) named_groups: Vec<NamedGroup, 13>,
@@ -138,6 +128,7 @@ impl TlsClock for NoClock {
     }
 }

+/// Provides the RNG, cipher suite, optional certificate verifier, and optional client signing key.
 pub trait CryptoBackend {
     type CipherSuite: TlsCipherSuite;
     type Signature: AsRef<[u8]>;
@@ -148,10 +139,6 @@ pub trait CryptoBackend {
         Err::<&mut NoVerify, _>(crate::ProtocolError::Unimplemented)
     }

-    /// Provide a signing key for client certificate authentication.
-    ///
-    /// The provider resolves the private key internally (e.g. from memory, flash, or a hardware
-    /// crypto module such as an HSM/TPM/secure element).
     fn signer(
         &mut self,
     ) -> Result<(impl signature::SignerMut<Self::Signature>, SignatureScheme), crate::ProtocolError>
@@ -159,12 +146,6 @@ pub trait CryptoBackend {
         Err::<(NoSign, _), crate::ProtocolError>(crate::ProtocolError::Unimplemented)
     }

-    /// Resolve the client certificate for mutual TLS authentication.
-    ///
-    /// Return `None` if no client certificate is available (an empty certificate message will
-    /// be sent to the server). The data type `D` can be borrowed (`&[u8]`) or owned
-    /// (e.g. `heapless::Vec<u8, N>`) — the certificate is only needed long enough to encode
-    /// into the TLS message.
     fn client_cert(&mut self) -> Option<Certificate<impl AsRef<[u8]>>> {
         None::<Certificate<&[u8]>>
     }
@@ -203,6 +184,7 @@ impl<S> signature::Signer<S> for NoSign {
     }
 }

+/// A [`CryptoBackend`] that skips certificate verification. Suitable for testing or constrained environments.
 pub struct SkipVerifyProvider<'a, CipherSuite, RNG> {
     rng: RNG,
     priv_key: Option<&'a [u8]>,
@@ -278,7 +260,6 @@ impl<'a, CP> ConnectContext<'a, CP>
 where
     CP: CryptoBackend,
 {
-    /// Create a new context with a given config and a crypto provider.
     pub fn new(config: &'a ConnectConfig<'a>, crypto_provider: CP) -> Self {
         Self {
             config,
@@ -298,6 +279,7 @@ impl<'a> ConnectConfig<'a> {
         alpn_protocols: None,
     };

+    // RSA signature schemes are disabled by default to save code size; opt in via `alloc` feature
     if cfg!(feature = "alloc") {
         config = config.enable_rsa_signatures();
     }
@@ -321,7 +303,6 @@ impl<'a> ConnectConfig<'a> {
         config
     }

-    /// Enable RSA ciphers even if they might not be supported.
     pub fn enable_rsa_signatures(mut self) -> Self {
         unwrap!(
             self.signature_schemes
@@ -361,47 +342,22 @@ impl<'a> ConnectConfig<'a> {
         self
     }

-    /// Configure ALPN protocol names to send in the ClientHello.
-    ///
-    /// The server will select one of the offered protocols and echo it back
-    /// in EncryptedExtensions. This is required for endpoints that multiplex
-    /// protocols on a single port (e.g. AWS IoT Core MQTT over port 443).
     pub fn with_alpn(mut self, protocols: &'a [&'a [u8]]) -> Self {
         self.alpn_protocols = Some(protocols);
         self
     }

-    /// Configures the maximum plaintext fragment size.
-    ///
-    /// This option may help reduce memory size, as smaller fragment lengths require smaller
-    /// read/write buffers. Note that mote-tls does not currently use this option to fragment
-    /// writes. Note that the buffers need to include some overhead over the configured fragment
-    /// length.
-    ///
-    /// From [RFC 6066, Section 4. Maximum Fragment Length Negotiation](https://www.rfc-editor.org/rfc/rfc6066#page-8):
-    ///
-    /// > Without this extension, TLS specifies a fixed maximum plaintext
-    /// > fragment length of 2^14 bytes. It may be desirable for constrained
-    /// > clients to negotiate a smaller maximum fragment length due to memory
-    /// > limitations or bandwidth limitations.
-    ///
-    /// > For example, if the negotiated length is 2^9=512, then, when using currently defined
-    /// > cipher suites ([...]) and null compression, the record-layer output can be at most
-    /// > 805 bytes: 5 bytes of headers, 512 bytes of application data, 256 bytes of padding,
-    /// > and 32 bytes of MAC.
     pub fn with_max_fragment_length(mut self, max_fragment_length: MaxFragmentLength) -> Self {
         self.max_fragment_length = Some(max_fragment_length);
         self
     }

-    /// Resets the max fragment length to 14 bits (16384).
     pub fn reset_max_fragment_length(mut self) -> Self {
         self.max_fragment_length = None;
         self
     }

     pub fn with_psk(mut self, psk: &'a [u8], identities: &[&'a [u8]]) -> Self {
-        // TODO: Remove potential panic
         self.psk = Some((psk, unwrap!(Vec::from_slice(identities).ok())));
         self
     }
@@ -25,6 +25,8 @@ use crate::content_types::ContentType;
 use crate::parse_buffer::ParseBuffer;
 use aes_gcm::aead::{AeadCore, AeadInPlace, KeyInit};

+// Decrypts an ApplicationData record in-place, then dispatches the inner content type to `cb`.
+// Plaintext records (Handshake, ChangeCipherSpec) are forwarded to `cb` without decryption.
 pub(crate) fn decrypt_record<CipherSuite>(
     key_schedule: &mut ReadKeySchedule<CipherSuite>,
     record: ServerRecord<'_, CipherSuite>,
@@ -49,6 +51,7 @@ where
             .decrypt_in_place(&nonce, header.data(), &mut app_data)
             .map_err(|_| ProtocolError::CryptoError)?;

+        // Strip TLS 1.3 inner-content padding: trailing zero bytes before the real content type
         let padding = app_data
             .as_slice()
             .iter()
@@ -58,18 +61,17 @@ where
             app_data.truncate(index + 1);
         };

+        // The last byte of the decrypted payload is the actual ContentType (RFC 8446 §5.4)
         let content_type = ContentType::of(*app_data.as_slice().last().unwrap())
             .ok_or(ProtocolError::InvalidRecord)?;

         trace!("Decrypting: content type = {:?}", content_type);

-        // Remove the content type
         app_data.truncate(app_data.len() - 1);

         let mut buf = ParseBuffer::new(app_data.as_slice());
         match content_type {
             ContentType::Handshake => {
-                // Decode potentially coalesced handshake messages
                 while buf.remaining() > 0 {
                     let inner = ServerHandshake::read(&mut buf, key_schedule.transcript_hash())?;
                     cb(key_schedule, ServerRecord::Handshake(inner))?;
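The padding-stripping and content-type steps in this hunk implement the TLSInnerPlaintext structure of RFC 8446 §5.4: after AEAD decryption, trailing zero bytes are padding, and the last non-zero byte is the real content type. A standalone sketch (our own helper, not the crate's API) of the same logic:

```rust
// Split a decrypted TLSInnerPlaintext into (content, content_type).
// Trailing zeros are padding; an all-zero payload is a malformed record.
fn split_inner_plaintext(decrypted: &[u8]) -> Option<(&[u8], u8)> {
    // rposition scans from the end for the last non-zero byte
    let last = decrypted.iter().rposition(|&b| b != 0)?;
    Some((&decrypted[..last], decrypted[last]))
}

fn main() {
    // Two content bytes, content type 0x16 (Handshake), two bytes of zero padding
    let plaintext = [0xAA, 0xBB, 0x16, 0x00, 0x00];
    let (content, ctype) = split_inner_plaintext(&plaintext).unwrap();
    assert_eq!(content, &[0xAA, 0xBB][..]);
    assert_eq!(ctype, 0x16);
    // All-zero payload: no content type found
    assert!(split_inner_plaintext(&[0x00, 0x00]).is_none());
}
```

The crate does the same thing in two `truncate` calls on the in-place buffer to avoid copies, which matters on the embedded targets it supports.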
@@ -102,10 +104,6 @@ where
 {
     let client_key = key_schedule.get_key()?;
     let nonce = key_schedule.get_nonce()?;
-    // trace!("encrypt key {:02x?}", client_key);
-    // trace!("encrypt nonce {:02x?}", nonce);
-    // trace!("plaintext {} {:02x?}", buf.len(), buf.as_slice(),);
-    //let crypto = Aes128Gcm::new_varkey(&self.key_schedule.get_client_key()).unwrap();
    let crypto = <CipherSuite::Cipher as KeyInit>::new(client_key);
    let len = buf.len() + <CipherSuite::Cipher as AeadCore>::TagSize::to_usize();

@@ -115,6 +113,7 @@ where

     trace!("output size {}", len);
     let len_bytes = (len as u16).to_be_bytes();
+    // Additional data is the TLS record header (type=ApplicationData, legacy version 0x0303, length)
     let additional_data = [
         ContentType::ApplicationData as u8,
         0x03,
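The `additional_data` being built here is the 5-byte outer record header that TLS 1.3 feeds to the AEAD as associated data (RFC 8446 §5.2): outer type ApplicationData (23), the frozen legacy version 0x0303, and the ciphertext length. A minimal sketch of that header (helper name is ours):

```rust
// Build the TLSCiphertext header used as AEAD additional data.
fn record_aad(ciphertext_len: u16) -> [u8; 5] {
    let len = ciphertext_len.to_be_bytes();
    // 0x17 = ApplicationData, 0x0303 = legacy TLS 1.2 version
    [0x17, 0x03, 0x03, len[0], len[1]]
}

fn main() {
    assert_eq!(record_aad(0x0123), [0x17, 0x03, 0x03, 0x01, 0x23]);
}
```

Because the header is authenticated but not encrypted, any tampering with the outer length or type makes the AEAD tag check fail during decryption.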
@@ -128,10 +127,12 @@ where
         .map_err(|_| ProtocolError::InvalidApplicationData)
 }

+/// Ephemeral state held between handshake steps — discarded once the handshake completes.
 pub struct Handshake<CipherSuite>
 where
     CipherSuite: TlsCipherSuite,
 {
+    // Saved pre-master transcript hash used for Finished after a certificate exchange
     traffic_hash: Option<CipherSuite::Hash>,
     secret: Option<EphemeralSecret>,
     certificate_request: Option<CertificateRequest>,
@@ -150,6 +151,7 @@ where
     }
 }

+/// TLS handshake state machine. Drives the client through all stages of the TLS 1.3 handshake.
 #[derive(Debug, Clone, Copy, PartialEq)]
 #[cfg_attr(feature = "defmt", derive(defmt::Format))]
 pub enum State {
@@ -470,13 +472,13 @@ where
             ServerHandshake::CertificateRequest(request) => {
                 handshake.certificate_request.replace(request.try_into()?);
             }
             ServerHandshake::Finished(finished) => {
                 if !key_schedule.verify_server_finished(&finished)? {
                     warn!("Server signature verification failed");
                     return Err(ProtocolError::InvalidSignature);
                 }

-                // trace!("server verified {}", verified);
+                // If the server sent a CertificateRequest we must respond with a cert before Finished
                 state = if handshake.certificate_request.is_some() {
                     State::ClientCert
                 } else {
@@ -517,7 +519,6 @@ where
             .ok_or(ProtocolError::InvalidHandshake)?
             .request_context;

-        // Declare cert before certificate so owned data outlives the CertificateRef that borrows it
         let cert = crypto_provider.client_cert();
         let mut certificate = CertificateRef::with_context(request_context);
         let next_state = if let Some(ref cert) = cert {
@@ -547,9 +548,9 @@ where
 {
     let (result, record) = match crypto_provider.signer() {
         Ok((mut signing_key, signature_scheme)) => {
+            // CertificateVerify message format: 64 spaces + context string + \0 + transcript hash (RFC 8446 §4.4.3)
             let ctx_str = b"TLS 1.3, client CertificateVerify\x00";

-            // 64 (pad) + 34 (ctx) + 48 (SHA-384) = 146 bytes required
             let mut msg: heapless::Vec<u8, 146> = heapless::Vec::new();
             msg.resize(64, 0x20).map_err(|_| ProtocolError::EncodeError)?;
             msg.extend_from_slice(ctx_str)
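The removed size comment explained the 146-byte capacity: the content a client signs for CertificateVerify is 64 bytes of 0x20 padding, then the NUL-terminated context string (34 bytes), then the transcript hash (48 bytes for SHA-384). A heap-allocating sketch of the same layout (the function is illustrative, not the crate's code, which builds this in a fixed-capacity `heapless::Vec`):

```rust
// Assemble the to-be-signed content for a client CertificateVerify (RFC 8446 §4.4.3).
fn certificate_verify_input(transcript_hash: &[u8]) -> Vec<u8> {
    let mut msg = vec![0x20u8; 64]; // 64 bytes of space (0x20) padding
    msg.extend_from_slice(b"TLS 1.3, client CertificateVerify\x00"); // 34 bytes incl. NUL
    msg.extend_from_slice(transcript_hash);
    msg
}

fn main() {
    // SHA-384 transcript: 64 + 34 + 48 = 146 bytes, matching heapless::Vec<u8, 146>
    let input = certificate_verify_input(&[0u8; 48]);
    assert_eq!(input.len(), 146);
    assert!(input[..64].iter().all(|&b| b == 0x20));
}
```

A server validating the message reconstructs the same bytes (with the "server" context string on its side) and verifies the signature against the certificate's public key.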
@@ -624,6 +625,7 @@ fn client_finished_finalize<CipherSuite>(
 where
     CipherSuite: TlsCipherSuite,
 {
+    // Restore the transcript hash captured before the client cert exchange, then derive app traffic secrets
     key_schedule.replace_transcript_hash(
         handshake
             .traffic_hash
@@ -4,16 +4,6 @@ use crate::{
|
|||||||
parse_buffer::{ParseBuffer, ParseError},
|
parse_buffer::{ParseBuffer, ParseError},
|
||||||
};
|
};
|
||||||
|
|
||||||
/// ALPN protocol name list per RFC 7301, Section 3.1.
|
|
||||||
///
|
|
||||||
/// Wire format:
|
|
||||||
/// ```text
|
|
||||||
/// opaque ProtocolName<1..2^8-1>;
|
|
||||||
///
|
|
||||||
/// struct {
|
|
||||||
/// ProtocolName protocol_name_list<2..2^16-1>
|
|
||||||
/// } ProtocolNameList;
|
|
||||||
/// ```
|
|
||||||
#[derive(Debug, Clone)]
|
#[derive(Debug, Clone)]
|
||||||
#[cfg_attr(feature = "defmt", derive(defmt::Format))]
|
#[cfg_attr(feature = "defmt", derive(defmt::Format))]
|
||||||
pub struct AlpnProtocolNameList<'a> {
|
pub struct AlpnProtocolNameList<'a> {
|
||||||
@@ -22,12 +12,6 @@ pub struct AlpnProtocolNameList<'a> {
 
 impl<'a> AlpnProtocolNameList<'a> {
     pub fn parse(buf: &mut ParseBuffer<'a>) -> Result<Self, ParseError> {
-        // We parse but don't store the individual protocol names in a heapless
-        // container — just validate the wire format. The slice reference is kept
-        // for the lifetime of the parse buffer, but since we can't reconstruct
-        // `&[&[u8]]` from a flat buffer without allocation, we store an empty
-        // slice. Callers that need the parsed protocols (server-side) would need
-        // a different approach; for our client-side use we only need encode().
         let list_len = buf.read_u16()? as usize;
         let mut list_buf = buf.slice(list_len)?;
 
@@ -43,7 +27,6 @@ impl<'a> AlpnProtocolNameList<'a> {
     }
 
     pub fn encode(&self, buf: &mut CryptoBuffer) -> Result<(), ProtocolError> {
-        // Outer u16 length prefix for the ProtocolNameList
         buf.with_u16_length(|buf| {
             for protocol in self.protocols {
                 buf.push(protocol.len() as u8)
@@ -109,8 +109,8 @@ mod tests {
     fn test_parse_empty() {
         setup();
         let buffer = [
-            0x00, 0x17, // Secp256r1
-            0x00, 0x00, // key_exchange length = 0 bytes
+            0x00, 0x17,
+            0x00, 0x00,
         ];
         let result = KeyShareEntry::parse(&mut ParseBuffer::new(&buffer)).unwrap();
 
@@ -122,8 +122,8 @@ mod tests {
     fn test_parse() {
         setup();
         let buffer = [
-            0x00, 0x17, // Secp256r1
-            0x00, 0x02, // key_exchange length = 2 bytes
+            0x00, 0x17,
+            0x00, 0x02,
             0xAA, 0xBB,
         ];
         let result = KeyShareEntry::parse(&mut ParseBuffer::new(&buffer)).unwrap();
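The test vectors in the hunks above follow the KeyShareEntry wire layout from RFC 8446 §4.2.8: a u16 named-group code point, a u16 key_exchange length, then that many opaque bytes. A minimal standalone sketch of that layout (the function name and Option-based error handling are illustrative, not the crate's actual `KeyShareEntry::parse` API):

```rust
// Hypothetical helper mirroring the byte layout the tests exercise:
// [group: u16 BE][key_exchange length: u16 BE][key_exchange bytes...]
fn parse_key_share(buf: &[u8]) -> Option<(u16, &[u8])> {
    let group = u16::from_be_bytes([*buf.get(0)?, *buf.get(1)?]);
    let len = u16::from_be_bytes([*buf.get(2)?, *buf.get(3)?]) as usize;
    // Slice out exactly `len` opaque key-exchange bytes, failing on truncation
    let opaque = buf.get(4..4 + len)?;
    Some((group, opaque))
}
```

Both test buffers decode to group 0x0017 (secp256r1), with two and zero key-exchange bytes respectively.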
@@ -4,23 +4,12 @@ use crate::{
     parse_buffer::{ParseBuffer, ParseError},
 };
 
-/// Maximum plaintext fragment length
-///
-/// RFC 6066, Section 4. Maximum Fragment Length Negotiation
-/// Without this extension, TLS specifies a fixed maximum plaintext
-/// fragment length of 2^14 bytes. It may be desirable for constrained
-/// clients to negotiate a smaller maximum fragment length due to memory
-/// limitations or bandwidth limitations.
 #[derive(Debug, Copy, Clone, PartialEq)]
 #[cfg_attr(feature = "defmt", derive(defmt::Format))]
 pub enum MaxFragmentLength {
-    /// 512 bytes
     Bits9 = 1,
-    /// 1024 bytes
     Bits10 = 2,
-    /// 2048 bytes
     Bits11 = 3,
-    /// 4096 bytes
     Bits12 = 4,
 }
 
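The `MaxFragmentLength` variants in the hunk above encode RFC 6066's code points 1 through 4, which stand for 2^9 through 2^12 bytes, so the negotiated size is simply a shift of the code point. A small sketch (the function name is illustrative, not part of the crate):

```rust
// Hypothetical mapping from an RFC 6066 max_fragment_length code point to bytes:
// 1 -> 512 (2^9), 2 -> 1024 (2^10), 3 -> 2048 (2^11), 4 -> 4096 (2^12)
fn max_fragment_bytes(code: u8) -> Option<usize> {
    match code {
        1..=4 => Some(1usize << (8 + code)),
        _ => None, // values outside RFC 6066's defined range
    }
}
```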
@@ -23,14 +23,12 @@ impl<const N: usize> PreSharedKeyClientHello<'_, N> {
                 buf.with_u16_length(|buf| buf.extend_from_slice(identity))
                     .map_err(|_| ProtocolError::EncodeError)?;
 
-                // NOTE: No support for ticket age, set to 0 as recommended by RFC
                 buf.push_u32(0).map_err(|_| ProtocolError::EncodeError)?;
             }
             Ok(())
         })
         .map_err(|_| ProtocolError::EncodeError)?;
 
-        // NOTE: We encode binders later after computing the transcript.
         let binders_len = (1 + self.hash_size) * self.identities.len();
         buf.push_u16(binders_len as u16)
             .map_err(|_| ProtocolError::EncodeError)?;
@@ -42,9 +42,6 @@ impl<'a> ServerName<'a> {
         let name_len = buf.read_u16()?;
         let name = buf.slice(name_len as usize)?.as_slice();
 
-        // RFC 6066, Section 3. Server Name Indication
-        // The hostname is represented as a byte
-        // string using ASCII encoding without a trailing dot.
         if name.is_ascii() {
             Ok(ServerName {
                 name_type,
@@ -107,12 +104,6 @@ impl<'a, const N: usize> ServerNameList<'a, N> {
         }
     }
 
-// RFC 6066, Section 3. Server Name Indication
-// A server that receives a client hello containing the "server_name"
-// extension [..]. In this event, the server
-// SHALL include an extension of type "server_name" in the (extended)
-// server hello. The "extension_data" field of this extension SHALL be
-// empty.
 #[derive(Debug, Clone, PartialEq)]
 #[cfg_attr(feature = "defmt", derive(defmt::Format))]
 pub struct ServerNameResponse;
@@ -9,26 +9,21 @@ use heapless::Vec;
 #[derive(Debug, Clone, Copy, PartialEq)]
 #[cfg_attr(feature = "defmt", derive(defmt::Format))]
 pub enum SignatureScheme {
-    /* RSASSA-PKCS1-v1_5 algorithms */
     RsaPkcs1Sha256,
     RsaPkcs1Sha384,
     RsaPkcs1Sha512,
 
-    /* ECDSA algorithms */
     EcdsaSecp256r1Sha256,
     EcdsaSecp384r1Sha384,
     EcdsaSecp521r1Sha512,
 
-    /* RSASSA-PSS algorithms with public key OID rsaEncryption */
     RsaPssRsaeSha256,
     RsaPssRsaeSha384,
     RsaPssRsaeSha512,
 
-    /* EdDSA algorithms */
     Ed25519,
     Ed448,
 
-    /* RSASSA-PSS algorithms with public key OID RSASSA-PSS */
     RsaPssPssSha256,
     RsaPssPssSha384,
     RsaPssPssSha512,
@@ -37,22 +32,16 @@ pub enum SignatureScheme {
     Sha224Rsa,
     Sha224Dsa,
 
-    /* Legacy algorithms */
     RsaPkcs1Sha1,
     EcdsaSha1,
 
-    /* Brainpool */
     Sha256BrainpoolP256r1,
     Sha384BrainpoolP384r1,
     Sha512BrainpoolP512r1,
 
-    /* ML-DSA */
     MlDsa44,
     MlDsa65,
     MlDsa87,
-    /* Reserved Code Points */
-    //private_use(0xFE00..0xFFFF),
-    //(0xFFFF)
 }
 
 impl SignatureScheme {
@@ -9,21 +9,18 @@ use crate::{
 #[derive(Copy, Clone, Debug, PartialEq)]
 #[cfg_attr(feature = "defmt", derive(defmt::Format))]
 pub enum NamedGroup {
-    /* Elliptic Curve Groups (ECDHE) */
     Secp256r1,
     Secp384r1,
     Secp521r1,
     X25519,
     X448,
 
-    /* Finite Field Groups (DHE) */
     Ffdhe2048,
     Ffdhe3072,
     Ffdhe4096,
     Ffdhe6144,
     Ffdhe8192,
 
-    /* Post-quantum hybrid groups */
     X25519MLKEM768,
     SecP256r1MLKEM768,
     SecP384r1MLKEM1024,
@@ -4,12 +4,12 @@ macro_rules! extension_group {
     }) => {
         #[derive(Debug, Clone)]
         #[cfg_attr(feature = "defmt", derive(defmt::Format))]
-        #[allow(dead_code)] // extension_data may not be used
+        #[allow(dead_code)]
         pub enum $name$(<$lt>)? {
             $($extension($extension_data)),+
         }
 
-        #[allow(dead_code)] // not all methods are used
+        #[allow(dead_code)]
         impl$(<$lt>)? $name$(<$lt>)? {
             pub fn extension_type(&self) -> crate::extensions::ExtensionType {
                 match self {
@@ -26,7 +26,6 @@ macro_rules! extension_group {
             }
 
             pub fn parse(buf: &mut crate::parse_buffer::ParseBuffer$(<$lt>)?) -> Result<Self, crate::ProtocolError> {
-                // Consume extension data even if we don't recognize the extension
                 let extension_type = crate::extensions::ExtensionType::parse(buf);
                 let data_len = buf.read_u16().map_err(|_| crate::ProtocolError::DecodeError)? as usize;
                 let mut ext_data = buf.slice(data_len).map_err(|_| crate::ProtocolError::DecodeError)?;
@@ -51,11 +50,6 @@ macro_rules! extension_group {
                     #[allow(unreachable_patterns)]
                     other => {
                         warn!("Read unexpected ExtensionType: {:?}", other);
-                        // Section 4.2. Extensions
-                        // If an implementation receives an extension
-                        // which it recognizes and which is not specified for the message in
-                        // which it appears, it MUST abort the handshake with an
-                        // "illegal_parameter" alert.
                         Err(crate::ProtocolError::AbortHandshake(
                             crate::alert::AlertLevel::Fatal,
                             crate::alert::AlertDescription::IllegalParameter,
@@ -84,7 +78,6 @@ macro_rules! extension_group {
                             .map_err(|_| crate::ProtocolError::DecodeError)?;
                     }
                     Err(crate::ProtocolError::UnknownExtensionType) => {
-                        // ignore unrecognized extension type
                     }
                     Err(err) => return Err(err),
                 }
@@ -97,6 +90,4 @@ macro_rules! extension_group {
     };
 }
 
-// This re-export makes it possible to omit #[macro_export]
-// https://stackoverflow.com/a/67140319
 pub(crate) use extension_group;
@@ -15,7 +15,6 @@ use crate::extensions::{
     extension_group_macro::extension_group,
 };
 
-// Source: https://www.rfc-editor.org/rfc/rfc8446#section-4.2 table, rows marked with CH
 extension_group! {
     pub enum ClientHelloExtension<'a> {
         ServerName(ServerNameList<'a, 1>),
@@ -43,17 +42,15 @@ extension_group! {
     }
 }
 
-// Source: https://www.rfc-editor.org/rfc/rfc8446#section-4.2 table, rows marked with SH
 extension_group! {
     pub enum ServerHelloExtension<'a> {
         KeyShare(KeyShareServerHello<'a>),
         PreSharedKey(PreSharedKeyServerHello),
-        Cookie(Unimplemented<'a>), // temporary so we don't trip up on HelloRetryRequests
+        Cookie(Unimplemented<'a>),
         SupportedVersions(SupportedVersionsServerHello)
     }
 }
 
-// Source: https://www.rfc-editor.org/rfc/rfc8446#section-4.2 table, rows marked with EE
 extension_group! {
     pub enum EncryptedExtensionsExtension<'a> {
         ServerName(ServerNameResponse),
@@ -68,7 +65,6 @@ extension_group! {
     }
 }
 
-// Source: https://www.rfc-editor.org/rfc/rfc8446#section-4.2 table, rows marked with CR
 extension_group! {
     pub enum CertificateRequestExtension<'a> {
         StatusRequest(Unimplemented<'a>),
@@ -81,7 +77,6 @@ extension_group! {
     }
 }
 
-// Source: https://www.rfc-editor.org/rfc/rfc8446#section-4.2 table, rows marked with CT
 extension_group! {
     pub enum CertificateExtension<'a> {
         StatusRequest(Unimplemented<'a>),
@@ -89,14 +84,12 @@ extension_group! {
     }
 }
 
-// Source: https://www.rfc-editor.org/rfc/rfc8446#section-4.2 table, rows marked with NST
 extension_group! {
     pub enum NewSessionTicketExtension<'a> {
         EarlyData(Unimplemented<'a>)
     }
 }
 
-// Source: https://www.rfc-editor.org/rfc/rfc8446#section-4.2 table, rows marked with HRR
 extension_group! {
     pub enum HelloRetryRequestExtension<'a> {
         KeyShare(Unimplemented<'a>),
@@ -1,9 +1,7 @@
 use crate::ProtocolError;
 use crate::buffer::CryptoBuffer;
 use core::fmt::{Debug, Formatter};
-//use digest::generic_array::{ArrayLength, GenericArray};
 use generic_array::{ArrayLength, GenericArray};
-// use heapless::Vec;
 
 pub struct PskBinder<N: ArrayLength<u8>> {
     pub verify: GenericArray<u8, N>,
@@ -25,7 +23,6 @@ impl<N: ArrayLength<u8>> Debug for PskBinder<N> {
 impl<N: ArrayLength<u8>> PskBinder<N> {
     pub(crate) fn encode(&self, buf: &mut CryptoBuffer<'_>) -> Result<(), ProtocolError> {
         let len = self.verify.len() as u8;
-        //buf.extend_from_slice(&[len[1], len[2], len[3]]);
         buf.push(len).map_err(|_| ProtocolError::EncodeError)?;
         buf.extend_from_slice(&self.verify[..self.verify.len()])
             .map_err(|_| ProtocolError::EncodeError)?;
@@ -79,7 +79,6 @@ impl<'a> CertificateEntryRef<'a> {
 
         let entry = CertificateEntryRef::X509(cert.as_slice());
 
-        // Validate extensions
         CertificateExtension::parse_vector::<2>(buf)?;
 
         Ok(entry)
@@ -103,14 +102,12 @@ impl<'a> CertificateEntryRef<'a> {
         match *self {
             CertificateEntryRef::RawPublicKey(_key) => {
                 todo!("ASN1_subjectPublicKeyInfo encoding?");
-                // buf.with_u24_length(|buf| buf.extend_from_slice(key))?;
             }
             CertificateEntryRef::X509(cert) => {
                 buf.with_u24_length(|buf| buf.extend_from_slice(cert))?;
             }
         }
 
-        // Zero extensions for now
         buf.push_u16(0)?;
         Ok(())
     }
@@ -18,7 +18,6 @@ impl<'a> CertificateRequestRef<'a> {
             .slice(request_context_len as usize)
             .map_err(|_| ProtocolError::InvalidCertificateRequest)?;
 
-        // Validate extensions
         let extensions = CertificateRequestExtension::parse_vector::<6>(buf)?;
 
         unused(extensions);
@@ -28,13 +28,6 @@ impl<'a> HandshakeVerifyRef<'a> {
     }
 }
 
-// Calculations for max. signature sizes:
-// ecdsaSHA256 -> 6 bytes (ASN.1 structure) + 32-33 bytes (r) + 32-33 bytes (s) = 70..72 bytes
-// ecdsaSHA384 -> 6 bytes (ASN.1 structure) + 48-49 bytes (r) + 48-49 bytes (s) = 102..104 bytes
-// Ed25519 -> 6 bytes (ASN.1 structure) + 32-33 bytes (r) + 32-33 bytes (s) = 70..72 bytes
-// RSA2048 -> 256 bytes
-// RSA3072 -> 384 bytes
-// RSA4096 -> 512 bytes
 #[cfg(feature = "rsa")]
 const SIGNATURE_SIZE: usize = 512;
 #[cfg(not(feature = "rsa"))]
@@ -62,28 +62,19 @@ where
         buf.extend_from_slice(&self.random)
             .map_err(|_| ProtocolError::EncodeError)?;
 
-        // session id (empty)
+        // Empty legacy session ID — TLS 1.3 doesn't use it, but the field must be present
         buf.push(0).map_err(|_| ProtocolError::EncodeError)?;
 
-        // cipher suites (2+)
-        //buf.extend_from_slice(&((self.config.cipher_suites.len() * 2) as u16).to_be_bytes());
-        //for c in self.config.cipher_suites.iter() {
-        //buf.extend_from_slice(&(*c as u16).to_be_bytes());
-        //}
+        // Exactly one cipher suite entry (2-byte length prefix + 2-byte code point)
        buf.push_u16(2).map_err(|_| ProtocolError::EncodeError)?;
         buf.push_u16(CipherSuite::CODE_POINT)
             .map_err(|_| ProtocolError::EncodeError)?;
 
-        // compression methods, 1 byte of 0
+        // Legacy compression methods: one entry, 0x00 = no compression
         buf.push(1).map_err(|_| ProtocolError::EncodeError)?;
         buf.push(0).map_err(|_| ProtocolError::EncodeError)?;
 
-        // extensions (1+)
         buf.with_u16_length(|buf| {
-            // Section 4.2.1. Supported Versions
-            // Implementations of this specification MUST send this extension in the
-            // ClientHello containing all versions of TLS which they are prepared to
-            // negotiate
             ClientHelloExtension::SupportedVersions(SupportedVersionsClientHello {
                 versions: Vec::from_slice(&[TLS13]).unwrap(),
             })
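The fixed legacy ClientHello fields written in the hunk above can be sketched with a plain `Vec<u8>` instead of the crate's `CryptoBuffer` (the function name is illustrative, and `0x1301` is used only as an example code point for TLS_AES_128_GCM_SHA256):

```rust
// Hypothetical sketch of the legacy fields: empty session ID, a single
// cipher-suite code point, and the mandatory null compression method.
fn encode_legacy_fields(buf: &mut Vec<u8>, cipher_suite: u16) {
    buf.push(0); // legacy_session_id: zero-length vector
    buf.extend_from_slice(&2u16.to_be_bytes()); // cipher_suites length: one 2-byte entry
    buf.extend_from_slice(&cipher_suite.to_be_bytes()); // e.g. 0x1301 = TLS_AES_128_GCM_SHA256
    buf.push(1); // legacy_compression_methods length
    buf.push(0); // "null" compression, the only value allowed in TLS 1.3
}
```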
@@ -129,11 +120,6 @@ where
             .encode(buf)?;
         }
 
-        // Section 4.2
-        // When multiple extensions of different types are present, the
-        // extensions MAY appear in any order, with the exception of
-        // "pre_shared_key" which MUST be the last extension in
-        // the ClientHello.
         if let Some((_, identities)) = &self.config.psk {
             ClientHelloExtension::PreSharedKey(PreSharedKeyClientHello {
                 identities: identities.clone(),
@@ -154,25 +140,16 @@ where
         transcript: &mut CipherSuite::Hash,
         write_key_schedule: &mut WriteKeySchedule<CipherSuite>,
     ) -> Result<(), ProtocolError> {
-        // Special case for PSK which needs to:
-        //
-        // 1. Add the client hello without the binders to the transcript
-        // 2. Create the binders for each identity using the transcript
-        // 3. Add the rest of the client hello.
-        //
-        // This causes a few issues since lengths must be correctly inside the payload,
-        // but won't actually be added to the record buffer until the end.
         if let Some((_, identities)) = &self.config.psk {
+            // PSK binders depend on the transcript up to (but not including) the binder values,
+            // so we hash the partial message, compute binders, then hash the remainder (RFC 8446 §4.2.11.2)
             let binders_len = identities.len() * (1 + HashOutputSize::<CipherSuite>::to_usize());
 
             let binders_pos = enc_buf.len() - binders_len;
 
-            // NOTE: Exclude the binders_len itself from the digest
             transcript.update(&enc_buf[0..binders_pos - 2]);
 
-            // Append after the client hello data. Sizes have already been set.
             let mut buf = CryptoBuffer::wrap(&mut enc_buf[binders_pos..]);
-            // Create a binder and encode for each identity
             for _id in identities {
                 let binder = write_key_schedule.create_psk_binder(transcript)?;
                 binder.encode(&mut buf)?;
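The binder arithmetic used in the hunk above is worth spelling out: each binder is a u8 length byte followed by a transcript-hash-sized HMAC, the binder list sits at the very end of the encoded ClientHello, and the transcript hash must stop 2 bytes earlier still, before the list's u16 length prefix. A standalone sketch (function names are illustrative, not the crate's API):

```rust
// Total bytes occupied by the binder list body: per identity,
// one u8 length byte plus one hash-sized HMAC value.
fn binders_len(num_identities: usize, hash_output_size: usize) -> usize {
    num_identities * (1 + hash_output_size)
}

// Offset up to which the "partial ClientHello" transcript is hashed:
// exclude the binder list AND its 2-byte u16 length prefix.
fn transcript_split(encoded_len: usize, num_identities: usize, hash_output_size: usize) -> usize {
    encoded_len - binders_len(num_identities, hash_output_size) - 2
}
```

For a SHA-256 suite (32-byte hash) with one identity, the binder list is 33 bytes and the transcript covers everything up to 35 bytes before the end of the message.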
@@ -2,10 +2,12 @@ use crate::ProtocolError;
 use crate::buffer::CryptoBuffer;
 use crate::parse_buffer::ParseBuffer;
 use core::fmt::{Debug, Formatter};
-//use digest::generic_array::{ArrayLength, GenericArray};
 use generic_array::{ArrayLength, GenericArray};
-// use heapless::Vec;
 
+/// TLS Finished message: contains an HMAC over the handshake transcript (RFC 8446 §4.4.4).
+///
+/// `hash` holds the transcript hash snapshot taken just before this message was received;
+/// it is `None` when the struct is used for a locally-generated Finished message.
 pub struct Finished<N: ArrayLength<u8>> {
     pub verify: GenericArray<u8, N>,
     pub hash: Option<GenericArray<u8, N>>,
@@ -28,23 +30,12 @@ impl<N: ArrayLength<u8>> Debug for Finished<N> {
 
 impl<N: ArrayLength<u8>> Finished<N> {
     pub fn parse(buf: &mut ParseBuffer, _len: u32) -> Result<Self, ProtocolError> {
-        // info!("finished len: {}", len);
         let mut verify = GenericArray::default();
         buf.fill(&mut verify)?;
-        //let hash = GenericArray::from_slice()
-        //let hash: Result<Vec<u8, _>, ()> = buf
-        //.slice(len as usize)
-        //.map_err(|_| ProtocolError::InvalidHandshake)?
-        //.into();
-        // info!("hash {:?}", verify);
-        //let hash = hash.map_err(|_| ProtocolError::InvalidHandshake)?;
-        // info!("hash ng {:?}", verify);
         Ok(Self { verify, hash: None })
     }
 
     pub(crate) fn encode(&self, buf: &mut CryptoBuffer<'_>) -> Result<(), ProtocolError> {
-        //let len = self.verify.len().to_be_bytes();
-        //buf.extend_from_slice(&[len[1], len[2], len[3]]);
         buf.extend_from_slice(&self.verify[..self.verify.len()])
             .map_err(|_| ProtocolError::EncodeError)?;
         Ok(())
@@ -1,4 +1,3 @@
-//use p256::elliptic_curve::AffinePoint;
 use crate::ProtocolError;
 use crate::config::TlsCipherSuite;
 use crate::handshake::certificate::CertificateRef;
@@ -25,6 +24,7 @@ pub mod finished;
 pub mod new_session_ticket;
 pub mod server_hello;
 
+// TLS legacy_record_version field — always 0x0303 for TLS 1.3 compatibility (RFC 8446 §5.1)
 const LEGACY_VERSION: u16 = 0x0303;
 
 type Random = [u8; 32];
@@ -101,6 +101,7 @@ where
         buf.push(self.handshake_type() as u8)
             .map_err(|_| ProtocolError::EncodeError)?;
 
+        // Handshake message body is preceded by a 3-byte (u24) length (RFC 8446 §4)
         buf.with_u24_length(|buf| self.encode_inner(buf))
     }
 
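The comment added in the hunk above refers to the u24 length framing of handshake messages. What a `with_u24_length`-style helper does can be sketched with a plain `Vec<u8>` (the crate's `CryptoBuffer` version is fallible and allocation-free; this simplified name and signature are illustrative):

```rust
// Hypothetical sketch: reserve three bytes, write the body, then backfill
// the big-endian u24 length covering only the body.
fn with_u24_length(buf: &mut Vec<u8>, body: impl FnOnce(&mut Vec<u8>)) {
    let len_pos = buf.len();
    buf.extend_from_slice(&[0, 0, 0]); // placeholder for the u24 length
    body(buf);
    let body_len = (buf.len() - len_pos - 3) as u32;
    let be = body_len.to_be_bytes();
    buf[len_pos..len_pos + 3].copy_from_slice(&be[1..4]); // low three bytes
}
```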
@@ -190,6 +191,7 @@ impl<'a, CipherSuite: TlsCipherSuite> ServerHandshake<'a, CipherSuite> {
         let mut handshake = Self::parse(buf)?;
         let handshake_end = buf.offset();
 
+        // Capture the current transcript hash into Finished before we update it with this message
         if let ServerHandshake::Finished(finished) = &mut handshake {
             finished.hash.replace(digest.clone().finalize());
         }
@@ -207,12 +209,10 @@ impl<'a, CipherSuite: TlsCipherSuite> ServerHandshake<'a, CipherSuite> {
         let content_len = buf.read_u24().map_err(|_| ProtocolError::InvalidHandshake)?;
 
         let handshake = match handshake_type {
-            //HandshakeType::ClientHello => {}
             HandshakeType::ServerHello => ServerHandshake::ServerHello(ServerHello::parse(buf)?),
             HandshakeType::NewSessionTicket => {
                 ServerHandshake::NewSessionTicket(NewSessionTicket::parse(buf)?)
             }
-            //HandshakeType::EndOfEarlyData => {}
             HandshakeType::EncryptedExtensions => {
                 ServerHandshake::EncryptedExtensions(EncryptedExtensions::parse(buf)?)
             }
@@ -228,8 +228,6 @@ impl<'a, CipherSuite: TlsCipherSuite> ServerHandshake<'a, CipherSuite> {
             HandshakeType::Finished => {
                 ServerHandshake::Finished(Finished::parse(buf, content_len)?)
             }
-            //HandshakeType::KeyUpdate => {}
-            //HandshakeType::MessageHash => {}
             t => {
                 warn!("Unimplemented handshake type: {:?}", t);
                 return Err(ProtocolError::Unimplemented);
@@ -17,9 +17,7 @@ pub struct ServerHello<'a> {
 
 impl<'a> ServerHello<'a> {
     pub fn parse(buf: &mut ParseBuffer<'a>) -> Result<ServerHello<'a>, ProtocolError> {
-        //let mut buf = ParseBuffer::new(&buf[0..content_length]);
-        //let mut buf = ParseBuffer::new(&buf);
+        // legacy_version is always 0x0303 in TLS 1.3; actual version is negotiated via extensions
 
         let _version = buf.read_u16().map_err(|_| ProtocolError::InvalidHandshake)?;
 
         let mut random = [0; 32];
@@ -29,23 +27,18 @@ impl<'a> ServerHello<'a> {
             .read_u8()
             .map_err(|_| ProtocolError::InvalidSessionIdLength)?;
 
-        //info!("sh 1");
-
+        // Legacy session ID echo: TLS 1.3 servers echo the client's session ID for middlebox compatibility
         let session_id = buf
             .slice(session_id_length as usize)
             .map_err(|_| ProtocolError::InvalidSessionIdLength)?;
-        //info!("sh 2");
 
         let cipher_suite = CipherSuite::parse(buf).map_err(|_| ProtocolError::InvalidCipherSuite)?;
 
-        ////info!("sh 3");
-        // skip compression method, it's 0.
+        // compression_method: always 0x00 in TLS 1.3
         buf.read_u8()?;
 
         let extensions = ServerHelloExtension::parse_vector(buf)?;
 
-        // debug!("server random {:x}", random);
-        // debug!("server session-id {:x}", session_id.as_slice());
         debug!("server cipher_suite {:?}", cipher_suite);
         debug!("server extensions {:?}", extensions);
 
@@ -63,6 +56,7 @@ impl<'a> ServerHello<'a> {
|
|||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Performs ECDH with the server's key share to derive the shared secret used in the handshake.
|
||||||
pub fn calculate_shared_secret(&self, secret: &EphemeralSecret) -> Option<SharedSecret> {
|
pub fn calculate_shared_secret(&self, secret: &EphemeralSecret) -> Option<SharedSecret> {
|
||||||
let server_key_share = self.key_share()?;
|
let server_key_share = self.key_share()?;
|
||||||
let server_public_key = PublicKey::from_sec1_bytes(server_key_share.opaque).ok()?;
|
let server_public_key = PublicKey::from_sec1_bytes(server_key_share.opaque).ok()?;
|
||||||
|
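The parser above walks the ServerHello body in RFC 8446 §4.1.3 field order: legacy version, random, session ID echo, cipher suite, compression byte, then extensions. That order can be sketched std-only with a plain slice cursor; `take` and `parse_server_hello` are illustrative names, not the crate's API.

```rust
// Minimal sketch of the ServerHello field order from RFC 8446 §4.1.3.
// `take` is a hypothetical helper, not the crate's ParseBuffer.
fn take<'a>(buf: &mut &'a [u8], n: usize) -> Option<&'a [u8]> {
    if buf.len() < n {
        return None;
    }
    let (head, rest) = buf.split_at(n);
    *buf = rest;
    Some(head)
}

fn parse_server_hello(mut buf: &[u8]) -> Option<(u16, u16)> {
    let version = take(&mut buf, 2)?; // legacy_version, always 0x0303
    let version = u16::from_be_bytes([version[0], version[1]]);
    take(&mut buf, 32)?; // server random
    let sid_len = take(&mut buf, 1)?[0] as usize;
    take(&mut buf, sid_len)?; // legacy_session_id_echo
    let suite = take(&mut buf, 2)?; // cipher suite
    let suite = u16::from_be_bytes([suite[0], suite[1]]);
    take(&mut buf, 1)?; // legacy_compression_method, always 0x00
    Some((version, suite))
}

fn main() {
    let mut msg = vec![0x03, 0x03];
    msg.extend_from_slice(&[0u8; 32]); // random
    msg.push(0); // empty session id echo
    msg.extend_from_slice(&[0x13, 0x01]); // TLS_AES_128_GCM_SHA256
    msg.push(0); // compression
    assert_eq!(parse_server_hello(&msg), Some((0x0303, 0x1301)));
    // truncated input fails cleanly
    assert!(parse_server_hello(&[0x03]).is_none());
}
```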
@@ -13,6 +13,7 @@ pub type LabelBufferSize<CipherSuite> = <CipherSuite as TlsCipherSuite>::LabelBu

 pub type IvArray<CipherSuite> = GenericArray<u8, <CipherSuite as TlsCipherSuite>::IvLen>;
 pub type KeyArray<CipherSuite> = GenericArray<u8, <CipherSuite as TlsCipherSuite>::KeyLen>;
+/// Hash-sized byte array, used as the HKDF secret at each key schedule stage.
 pub type HashArray<CipherSuite> = GenericArray<u8, HashOutputSize<CipherSuite>>;

 type Hkdf<CipherSuite> = hkdf::Hkdf<
@@ -43,17 +44,19 @@ where
     }
 }

+    // HKDF-Expand-Label as defined in RFC 8446 §7.1
     fn make_expanded_hkdf_label<N: ArrayLength<u8>>(
         &self,
         label: &[u8],
         context_type: ContextType<CipherSuite>,
     ) -> Result<GenericArray<u8, N>, ProtocolError> {
-        //info!("make label {:?} {}", label, len);
         let mut hkdf_label = heapless_typenum::Vec::<u8, LabelBufferSize<CipherSuite>>::new();
+        // Length field: desired output length as u16 big-endian
         hkdf_label
             .extend_from_slice(&N::to_u16().to_be_bytes())
             .map_err(|()| ProtocolError::InternalError)?;

+        // TLS 1.3 labels are prefixed with "tls13 " (6 bytes)
         let label_len = 6 + label.len() as u8;
         hkdf_label
             .extend_from_slice(&label_len.to_be_bytes())
@@ -80,11 +83,9 @@ where
         }

         let mut okm = GenericArray::default();
-        //info!("label {:x?}", label);
         self.as_ref()?
             .expand(&hkdf_label, &mut okm)
             .map_err(|_| ProtocolError::CryptoError)?;
-        //info!("expand {:x?}", okm);
         Ok(okm)
     }
 }
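The comments added above describe the HkdfLabel structure from RFC 8446 §7.1: a big-endian u16 output length, then the length-prefixed label with its mandatory `"tls13 "` prefix, then a length-prefixed context. A heap-allocated sketch of the same encoding (the crate itself builds this into a heapless vector):

```rust
// HkdfLabel per RFC 8446 §7.1:
//   uint16 length; opaque label<7..255> = "tls13 " + label; opaque context<0..255>
fn hkdf_label(out_len: u16, label: &[u8], context: &[u8]) -> Vec<u8> {
    let mut v = Vec::new();
    v.extend_from_slice(&out_len.to_be_bytes()); // desired output length, big-endian
    v.push(6 + label.len() as u8); // "tls13 " prefix is 6 bytes
    v.extend_from_slice(b"tls13 ");
    v.extend_from_slice(label);
    v.push(context.len() as u8); // context (e.g. a transcript hash), may be empty
    v.extend_from_slice(context);
    v
}

fn main() {
    let encoded = hkdf_label(16, b"key", &[]);
    // 2-byte length, 1-byte label length, 9-byte label, 1-byte empty context
    assert_eq!(encoded.len(), 13);
    assert_eq!(&encoded[..3], &[0x00u8, 0x10, 9]);
    assert_eq!(&encoded[3..12], b"tls13 key");
    assert_eq!(encoded[12], 0);
}
```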
@@ -108,6 +109,7 @@ where
     }
 }

+    // HKDF-Extract, chaining the previous stage's secret as the salt (RFC 8446 §7.1)
     fn initialize(&mut self, ikm: &[u8]) {
         let (secret, hkdf) = Hkdf::<CipherSuite>::extract(Some(self.secret.as_ref()), ikm);
         self.hkdf.replace(hkdf);
@@ -123,6 +125,7 @@ where
             .make_expanded_hkdf_label::<HashOutputSize<CipherSuite>>(label, context_type)
     }

+    // "Derive-Secret(Secret, "derived", "")" — used to chain stages per RFC 8446 §7.1
     fn derived(&mut self) -> Result<(), ProtocolError> {
         self.secret = self.derive_secret(b"derived", ContextType::empty_hash())?;
         Ok(())
@@ -178,6 +181,7 @@ where
             Hkdf::<CipherSuite>::from_prk(&secret).map_err(|_| ProtocolError::InternalError)?;

         self.traffic_secret.replace(traffic_secret);
+        // Derive per-record key and IV from the traffic secret (RFC 8446 §7.3)
         self.key = self
             .traffic_secret
             .make_expanded_hkdf_label(b"key", ContextType::None)?;
@@ -294,11 +298,9 @@ where
     }

     fn get_nonce(counter: u64, iv: &IvArray<CipherSuite>) -> IvArray<CipherSuite> {
-        //info!("counter = {} {:x?}", counter, &counter.to_be_bytes(),);
+        // Per-record nonce: XOR the static IV with the zero-padded sequence counter (RFC 8446 §5.3)
         let counter = Self::pad::<CipherSuite::IvLen>(&counter.to_be_bytes());

-        //info!("counter = {:x?}", counter);
-        // info!("iv = {:x?}", iv);

         let mut nonce = GenericArray::default();

@@ -310,21 +312,14 @@ where
             nonce[index] = l ^ r;
         }

-        //debug!("nonce {:x?}", nonce);

         nonce
     }

+    // Right-aligns `input` bytes in a zero-padded array of length N (big-endian padding)
     fn pad<N: ArrayLength<u8>>(input: &[u8]) -> GenericArray<u8, N> {
-        // info!("padding input = {:x?}", input);
         let mut padded = GenericArray::default();
         for (index, byte) in input.iter().rev().enumerate() {
-            /*info!(
-                "{} pad {}={:x?}",
-                index,
-                ((N::to_usize() - index) - 1),
-                *byte
-            );*/
             padded[(N::to_usize() - index) - 1] = *byte;
         }
         padded
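The nonce and padding comments above follow RFC 8446 §5.3: the 64-bit record sequence number is left-padded with zeros to the IV length and XORed with the static write IV. A std-only sketch with a fixed 12-byte IV as used by the AES-GCM suites (the fixed array sizes here are assumptions replacing the crate's `GenericArray` types):

```rust
// Right-align the 8-byte sequence number in a 12-byte array, then XOR with the static IV.
fn get_nonce(counter: u64, iv: &[u8; 12]) -> [u8; 12] {
    let mut padded = [0u8; 12];
    padded[4..].copy_from_slice(&counter.to_be_bytes()); // zero-padded, big-endian
    let mut nonce = [0u8; 12];
    for (i, (l, r)) in iv.iter().zip(padded.iter()).enumerate() {
        nonce[i] = l ^ r;
    }
    nonce
}

fn main() {
    let iv = [0xaa; 12];
    // counter 0: the nonce equals the IV
    assert_eq!(get_nonce(0, &iv), iv);
    // counter 1: only the last byte differs
    let n = get_nonce(1, &iv);
    assert_eq!(&n[..11], &iv[..11]);
    assert_eq!(n[11], 0xaa ^ 0x01);
}
```

Because the counter occupies only the trailing bytes, consecutive records produce nonces that differ in the low-order bytes while the IV's leading bytes stay fixed.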
@@ -334,8 +329,8 @@ where
         GenericArray::default()
     }

-    // Initializes the early secrets with a callback for any PSK binders
     pub fn initialize_early_secret(&mut self, psk: Option<&[u8]>) -> Result<(), ProtocolError> {
+        // IKM is 0-bytes when no PSK is used — still required to derive the binder key
         self.shared.initialize(
             #[allow(clippy::or_fun_call)]
             psk.unwrap_or(Self::zero().as_slice()),
@@ -358,10 +353,9 @@ where
     }

     pub fn initialize_master_secret(&mut self) -> Result<(), ProtocolError> {
+        // IKM is all-zeros at the master secret stage (RFC 8446 §7.1)
         self.shared.initialize(Self::zero().as_slice());

-        //let context = self.transcript_hash.as_ref().unwrap().clone().finalize();
-        //info!("Derive keys, hash: {:x?}", context);

         self.calculate_traffic_secrets(b"c ap traffic", b"s ap traffic")?;
         self.shared.derived()
@@ -471,9 +465,6 @@ where
         &self,
         finished: &Finished<HashOutputSize<CipherSuite>>,
     ) -> Result<bool, ProtocolError> {
-        //info!("verify server finished: {:x?}", finished.verify);
-        //self.client_traffic_secret.as_ref().unwrap().expand()
-        //info!("size ===> {}", D::OutputSize::to_u16());
         let key = self
             .state
             .traffic_secret
@@ -481,7 +472,6 @@ where
                 b"finished",
                 ContextType::None,
             )?;
-        // info!("hmac sign key {:x?}", key);
         let mut hmac = SimpleHmac::<CipherSuite::Hash>::new_from_slice(&key)
             .map_err(|_| ProtocolError::InternalError)?;
         Mac::update(
@@ -491,9 +481,6 @@ where
                 ProtocolError::InternalError
             })?,
         );
-        //let code = hmac.clone().finalize().into_bytes();
         Ok(hmac.verify(&finished.verify).is_ok())
-        //info!("verified {:?}", verified);
-        //unimplemented!()
     }
 }
44
src/lib.rs
@@ -4,49 +4,10 @@
     clippy::module_name_repetitions,
     clippy::cast_possible_truncation,
     clippy::cast_sign_loss,
-    clippy::missing_errors_doc // TODO
+    clippy::missing_errors_doc
 )]

-/*!
-# Example
-
-```
-use mote_tls::*;
-use embedded_io_adapters::tokio_1::FromTokio;
-use rand::rngs::OsRng;
-use tokio::net::TcpStream;
-
-#[tokio::main]
-async fn main() {
-    let stream = TcpStream::connect("google.com:443")
-        .await
-        .expect("error creating TCP connection");
-
-    println!("TCP connection opened");
-    let mut read_record_buffer = [0; 16384];
-    let mut write_record_buffer = [0; 16384];
-    let config = ConnectConfig::new().with_server_name("google.com").enable_rsa_signatures();
-    let mut tls = SecureStream::new(
-        FromTokio::new(stream),
-        &mut read_record_buffer,
-        &mut write_record_buffer,
-    );
-
-    // Allows disabling cert verification, in case you are using PSK and don't need it, or are just testing.
-    // otherwise, use mote_tls::cert_verify::CertVerifier, which only works on std for now.
-    tls.open(ConnectContext::new(
-        &config,
-        SkipVerifyProvider::new::<Aes128GcmSha256>(OsRng),
-    ))
-    .await
-    .expect("error establishing TLS connection");
-
-    println!("TLS session opened");
-}
-```
-*/
-
-// This mod MUST go first, so that the others see its macros.
 pub(crate) mod fmt;

 use parse_buffer::ParseError;
@@ -163,7 +124,4 @@ mod stdlib {
     }
 }

-/// An internal function to mark an unused value.
-///
-/// All calls to this should be removed before 1.x.
 fn unused<T>(_: T) {}
@@ -268,7 +268,6 @@ fn get_certificate_tlv_bytes<'a>(input: &[u8]) -> der::Result<&[u8]> {
     let header = der::Header::peek(&mut reader)?;
     header.tag().assert_eq(der::Tag::Sequence)?;

-    // Should we read the remaining two fields and call reader.finish() just be certain here?
     reader.tlv_bytes()
 }
@@ -68,7 +68,6 @@ impl<'b> ParseBuffer<'b> {
     }

     pub fn read_u16(&mut self) -> Result<u16, ParseError> {
-        //info!("pos={} len={}", self.pos, self.buffer.len());
         if self.pos + 2 <= self.buffer.len() {
             let value = u16::from_be_bytes([self.buffer[self.pos], self.buffer[self.pos + 1]]);
             self.pos += 2;
@@ -112,7 +111,6 @@ impl<'b> ParseBuffer<'b> {
         if self.pos + dest.len() <= self.buffer.len() {
             dest.copy_from_slice(&self.buffer[self.pos..self.pos + dest.len()]);
             self.pos += dest.len();
-            // info!("Copied {} bytes", dest.len());
             Ok(())
         } else {
             Err(ParseError::InsufficientBytes)
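The bounds check in `read_u16` above (`pos + 2 <= len` before indexing, advancing the cursor only on success) can be exercised in isolation; `Cursor` here is an illustrative stand-in, not the crate's `ParseBuffer`:

```rust
// Minimal cursor mirroring ParseBuffer's big-endian u16 read.
struct Cursor<'a> {
    buf: &'a [u8],
    pos: usize,
}

impl<'a> Cursor<'a> {
    fn read_u16(&mut self) -> Option<u16> {
        // Bounds check before indexing; the position only advances on success
        if self.pos + 2 <= self.buf.len() {
            let value = u16::from_be_bytes([self.buf[self.pos], self.buf[self.pos + 1]]);
            self.pos += 2;
            Some(value)
        } else {
            None
        }
    }
}

fn main() {
    let mut c = Cursor { buf: &[0x03, 0x03, 0xff], pos: 0 };
    assert_eq!(c.read_u16(), Some(0x0303));
    assert_eq!(c.read_u16(), None); // only one byte left
    assert_eq!(c.pos, 2); // position unchanged on failure
}
```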
@@ -1,4 +1,3 @@
-/// A reference to consume bytes from the internal buffer.
 #[must_use]
 pub struct ReadBuffer<'a> {
     data: &'a [u8],
@@ -31,25 +30,21 @@ impl<'a> ReadBuffer<'a> {
         self.len() == 0
     }

-    /// Consumes and returns a slice of at most `count` bytes.
     #[inline]
     pub fn peek(&mut self, count: usize) -> &'a [u8] {
         let count = self.len().min(count);
         let start = self.consumed;

-        // We mark the buffer used to prevent dropping unconsumed bytes.
         self.used = true;

         &self.data[start..start + count]
     }

-    /// Consumes and returns a slice of at most `count` bytes.
     #[inline]
     pub fn peek_all(&mut self) -> &'a [u8] {
         self.peek(self.len())
     }

-    /// Consumes and returns a slice of at most `count` bytes.
     #[inline]
     pub fn pop(&mut self, count: usize) -> &'a [u8] {
         let count = self.len().min(count);
@@ -60,19 +55,16 @@ impl<'a> ReadBuffer<'a> {
         &self.data[start..start + count]
     }

-    /// Consumes and returns the internal buffer.
     #[inline]
     pub fn pop_all(&mut self) -> &'a [u8] {
         self.pop(self.len())
     }

-    /// Drops the reference and restores internal buffer.
     #[inline]
     pub fn revert(self) {
         core::mem::forget(self);
     }

-    /// Tries to fills the buffer by consuming and copying bytes into it.
     #[inline]
     pub fn pop_into(&mut self, buf: &mut [u8]) -> usize {
         let to_copy = self.pop(buf.len());
@@ -89,7 +81,6 @@ impl Drop for ReadBuffer<'_> {
         *self.decrypted_consumed += if self.used {
             self.consumed
         } else {
-            // Consume all if dropped unused
             self.data.len()
         };
     }
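`revert` above works by `core::mem::forget`-ing the buffer so the consuming `Drop` implementation never runs. A reduced sketch of that pattern with simplified field names (this is the general guard idiom, not the crate's full `ReadBuffer`):

```rust
// On drop, report the consumed count back to the owner; `revert` suppresses
// that by forgetting the guard, so the owner sees no consumption.
struct Guard<'a> {
    consumed: usize,
    total: &'a mut usize,
}

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        *self.total += self.consumed;
    }
}

impl Guard<'_> {
    fn revert(self) {
        core::mem::forget(self); // skip the Drop impl; the borrow still ends here
    }
}

fn main() {
    let mut total = 0;
    {
        let g = Guard { consumed: 4, total: &mut total };
        drop(g); // Drop runs, consumption is recorded
    }
    assert_eq!(total, 4);
    {
        let g = Guard { consumed: 4, total: &mut total };
        g.revert(); // Drop never runs
    }
    assert_eq!(total, 4);
}
```

`mem::forget` is safe here because the guard owns no heap allocation; only the side effect in `Drop` is skipped.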
@@ -18,7 +18,6 @@ pub type Encrypted = bool;
 #[allow(clippy::large_enum_variant)]
 pub enum ClientRecord<'config, 'a, CipherSuite>
 where
-    // N: ArrayLength<u8>,
     CipherSuite: TlsCipherSuite,
 {
     Handshake(ClientHandshake<'config, 'a, CipherSuite>, Encrypted),
@@ -80,7 +79,6 @@ impl ClientRecordHeader {

 impl<'config, CipherSuite> ClientRecord<'config, '_, CipherSuite>
 where
-    //N: ArrayLength<u8>,
     CipherSuite: TlsCipherSuite,
 {
     pub fn header(&self) -> ClientRecordHeader {
@@ -158,12 +156,10 @@ impl RecordHeader {
     pub const LEN: usize = 5;

     pub fn content_type(&self) -> ContentType {
-        // Content type already validated in read
         unwrap!(ContentType::of(self.header[0]))
     }

     pub fn content_length(&self) -> usize {
-        // Content length already validated in read
         u16::from_be_bytes([self.header[3], self.header[4]]) as usize
     }

@@ -220,5 +216,4 @@ impl<'a, CipherSuite: TlsCipherSuite> ServerRecord<'a, CipherSuite> {
     }
 }

-    //pub fn parse<D: Digest>(buf: &[u8]) -> Result<Self, ProtocolError> {}
 }
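`content_length` above reads bytes 3 and 4 of the 5-byte record header as a big-endian u16. Standalone, with the header layout spelled out:

```rust
// TLS record header: content type (1 byte), legacy version (2 bytes),
// payload length (2 bytes, big-endian).
fn content_length(header: &[u8; 5]) -> usize {
    u16::from_be_bytes([header[3], header[4]]) as usize
}

fn main() {
    // ApplicationData (0x17), version 0x0303, length 0x0004
    let header = [0x17, 0x03, 0x03, 0x00, 0x04];
    assert_eq!(content_length(&header), 4);
    // maximum encodable length is 65535
    assert_eq!(content_length(&[0x17, 0x03, 0x03, 0xff, 0xff]), 65535);
}
```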
@@ -8,24 +8,25 @@ use crate::{
     record::{RecordHeader, ServerRecord},
 };

+/// Stateful reader that reassembles TLS records from a byte stream into the shared receive buffer.
+///
+/// `decoded` tracks how many bytes at the start of `buf` have already been handed to the caller;
+/// `pending` tracks bytes that have been read from the transport but not yet consumed as a record.
 pub struct RecordReader<'a> {
     pub(crate) buf: &'a mut [u8],
-    /// The number of decoded bytes in the buffer
     decoded: usize,
-    /// The number of read but not yet decoded bytes in the buffer
     pending: usize,
 }

 pub struct RecordReaderBorrowMut<'a> {
     pub(crate) buf: &'a mut [u8],
-    /// The number of decoded bytes in the buffer
     decoded: &'a mut usize,
-    /// The number of read but not yet decoded bytes in the buffer
     pending: &'a mut usize,
 }

 impl<'a> RecordReader<'a> {
     pub fn new(buf: &'a mut [u8]) -> Self {
+        // TLS 1.3 max plaintext record is 16 384 bytes + 256 bytes overhead = 16 640 bytes
         if buf.len() < 16640 {
             warn!("Read buffer is smaller than 16640 bytes, which may cause problems!");
         }
@@ -248,6 +249,7 @@ fn ensure_contiguous(
     pending: &mut usize,
     len: usize,
 ) -> Result<(), ProtocolError> {
+    // If the next record would overflow the end of the buffer, rotate unconsumed bytes to the front
    if *decoded + len > buf.len() {
         if len > buf.len() {
             error!(
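The comment added to `ensure_contiguous` describes rotating unconsumed bytes to the front of the buffer so the next record can be read contiguously. A sketch of that rotation using `copy_within`; the function name and signature are illustrative, not the crate's:

```rust
// Move the `pending` unconsumed bytes (starting at `decoded`) to the front of `buf`
// so that `len` more bytes fit contiguously after them.
fn rotate_to_front(buf: &mut [u8], decoded: &mut usize, pending: usize, len: usize) -> bool {
    if pending + len > buf.len() {
        return false; // record can never fit, even after rotation
    }
    if *decoded + pending + len > buf.len() {
        buf.copy_within(*decoded..*decoded + pending, 0);
        *decoded = 0;
    }
    true
}

fn main() {
    let mut buf = [0u8; 8];
    buf[5] = 0xde;
    buf[6] = 0xad;
    let mut decoded = 5;
    // 2 pending bytes at offset 5; a 4-byte record would overflow, so rotate
    assert!(rotate_to_front(&mut buf, &mut decoded, 2, 4));
    assert_eq!(decoded, 0);
    assert_eq!(&buf[..2], &[0xde, 0xad]);
    // a record larger than the whole buffer is rejected outright
    assert!(!rotate_to_front(&mut buf, &mut decoded, 2, 9));
}
```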
@@ -310,24 +312,20 @@ mod tests {
     fn can_read_blocking_case(chunk_size: usize) {
         let mut transport = ChunkRead(
             &[
-                // Header
                 ContentType::ApplicationData as u8,
                 0x03,
                 0x03,
                 0x00,
                 0x04,
-                // Data
                 0xde,
                 0xad,
                 0xbe,
                 0xef,
-                // Header
                 ContentType::ApplicationData as u8,
                 0x03,
                 0x03,
                 0x00,
                 0x02,
-                // Data
                 0xaa,
                 0xbb,
             ],
@@ -370,30 +368,26 @@ mod tests {
     #[test]
     fn can_read_blocking_must_rotate_buffer() {
         let mut transport = [
-            // Header
             ContentType::ApplicationData as u8,
             0x03,
             0x03,
             0x00,
             0x04,
-            // Data
             0xde,
             0xad,
             0xbe,
             0xef,
-            // Header
             ContentType::ApplicationData as u8,
             0x03,
             0x03,
             0x00,
             0x02,
-            // Data
             0xaa,
             0xbb,
         ]
         .as_slice();

-        let mut buf = [0; 4]; // cannot contain both data portions
+        let mut buf = [0; 4];
         let mut reader = RecordReader::new(&mut buf);
         let mut key_schedule = KeySchedule::<Aes128GcmSha256>::new();

@@ -429,13 +423,11 @@ mod tests {
     #[test]
     fn can_read_empty_record() {
         let mut transport = [
-            // Header
             ContentType::ApplicationData as u8,
             0x03,
             0x03,
             0x00,
             0x00,
-            // Header
             ContentType::ApplicationData as u8,
             0x03,
             0x03,
@@ -1,36 +1,24 @@
-//! Flush policy for TLS sockets.
-//!
-//! Two strategies are provided:
-//! - `Relaxed`: close the TLS encryption buffer and hand the data to the transport
-//!   delegate without forcing a transport-level flush.
-//! - `Strict`: in addition to handing the data to the transport delegate, also
-//!   request a flush of the transport. For TCP transports this typically means
-//!   waiting for an ACK (e.g. on embassy TCP sockets) before considering the
-//!   data fully flushed.

-/// Policy controlling how TLS layer flushes encrypted data to the transport.
+/// Controls whether `flush()` calls also flush the underlying transport.
+///
+/// `Strict` (the default) ensures bytes reach the network immediately after every record.
+/// `Relaxed` leaves transport flushing to the caller, which can reduce syscall overhead.
 #[derive(Clone, Copy, Debug, PartialEq, Eq)]
 pub enum FlushPolicy {
-    /// Close the TLS encryption buffer and pass bytes to the transport delegate.
-    /// Do not force a transport-level flush or wait for an ACK.
+    /// Only encrypt and hand bytes to the transport; do not call `transport.flush()`.
     Relaxed,

-    /// In addition to passing bytes to the transport delegate, request a
-    /// transport-level flush and wait for confirmation (ACK) before returning.
+    /// Call `transport.flush()` after writing each TLS record.
     Strict,
 }

 impl FlushPolicy {
-    /// Returns true when the transport delegate should be explicitly flushed.
-    ///
-    /// Relaxed -> false, Strict -> true.
     pub fn flush_transport(&self) -> bool {
         matches!(self, Self::Strict)
     }
 }

 impl Default for FlushPolicy {
-    /// Default to `Strict` for compatibility with mote-tls 0.17.0.
     fn default() -> Self {
         FlushPolicy::Strict
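A sketch of how a writer might consult `FlushPolicy` after emitting a record; `CountingTransport` is a hypothetical transport used only to make the policy's effect observable, not the crate's embedded-io interface:

```rust
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum FlushPolicy {
    Relaxed,
    Strict,
}

impl FlushPolicy {
    fn flush_transport(&self) -> bool {
        matches!(self, Self::Strict)
    }
}

// Hypothetical transport that counts flushes.
struct CountingTransport {
    flushes: usize,
}

impl CountingTransport {
    fn write_record(&mut self, _record: &[u8], policy: FlushPolicy) {
        // ... encrypt and hand the bytes to the transport here ...
        if policy.flush_transport() {
            self.flushes += 1; // Strict: flush after every record
        }
    }
}

fn main() {
    let mut t = CountingTransport { flushes: 0 };
    t.write_record(b"hello", FlushPolicy::Relaxed);
    t.write_record(b"hello", FlushPolicy::Strict);
    assert_eq!(t.flushes, 1);
}
```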