
crypto/tls: make Conn.Read return (n, io.EOF) when EOF is next in buffer

Update #3514

An io.Reader is permitted to return either (n, nil) or (n, io.EOF) on EOF or other error. The tls package previously always returned (n, nil) for a read of size n if n bytes were available, not surfacing errors at the same time.

Amazon's HTTPS frontends like to hang up on clients without sending the appropriate HTTP headers. (In their defense, they're allowed to hang up any time, but generally a server hangs up after a bit of inactivity, not immediately.) In any case, the Go HTTP client tries to re-use connections by looking at whether the response headers say to keep the connection open, and because the connection looks okay, under heavy load it's possible we'll reuse it immediately, writing the next request, just as the Transport's always-reading goroutine returns from tls.Conn.Read and sees (0, io.EOF).

But because Amazon does send an AlertCloseNotify record before it hangs up on us, and the tls package does its own internal buffering (up to 1024 bytes) of pending data, we have the AlertCloseNotify in an unread buffer when our Conn.Read (to the HTTP Transport code) reads its final bit of data in the HTTP response body.

This change makes that final Read return (n, io.EOF) when an AlertCloseNotify record is buffered right after, if we'd otherwise return (n, nil).

A dependent change in the HTTP code then notes whether a client connection has seen an io.EOF and uses that as an additional signal to not reuse an HTTPS connection. With both changes, the majority of Amazon request failures go away. Without either one, 10-20 goroutines hitting the S3 API leads to such an error rate that empirically up to 5 retries are needed to complete an API call.

LGTM=agl, rsc
R=agl, rsc
CC=golang-codereviews
https://golang.org/cl/76400046
10 years ago
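The commit above relies on the io.Reader contract: data and io.EOF may arrive on the same call, so callers must consume the n bytes before acting on the error. A minimal sketch of a well-behaved caller follows; it is not the net/http Transport code, and example.com is just a stand-in host.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
)

func main() {
	// Any HTTPS endpoint works here; example.com is only illustrative.
	conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	fmt.Fprintf(conn, "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

	buf := make([]byte, 4096)
	for {
		n, err := conn.Read(buf)
		if n > 0 {
			// Consume the data first; with this change the final body bytes
			// can be returned together with io.EOF when a close_notify alert
			// is already sitting in the connection's internal buffer.
			fmt.Printf("read %d bytes\n", n)
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
	}
}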
crypto/tls: adjust dynamic record sizes to grow arithmetically

The current code, introduced after Go 1.6 to improve latency on low-bandwidth connections, sends 1 kB packets until 1 MB has been sent, and then sends 16 kB packets (the maximum record size). Unfortunately this decreases throughput for 1-16 MB responses by 20% or so.

Following discussion on #15713, change cutoff to 128 kB sent and also grow the size allowed for successive packets: 1 kB, 2 kB, 3 kB, ..., 15 kB, 16 kB. This fixes the throughput problems: the overhead is now closer to 2%.

I hope this still helps with latency but I don't have a great way to test it. At the least, it's not worse than Go 1.6.

Comparing MaxPacket vs DynamicPacket benchmarks:

name               maxpkt time/op   dyn. time/op    delta
Throughput/1MB-8    5.07ms ±  7%     5.21ms ± 7%   +2.73%  (p=0.023 n=16+16)
Throughput/2MB-8    15.7ms ±201%      8.4ms ± 5%        ~  (p=0.604 n=20+16)
Throughput/4MB-8    14.3ms ±  1%     14.5ms ± 1%   +1.53%  (p=0.000 n=16+16)
Throughput/8MB-8    26.6ms ±  1%     26.8ms ± 1%   +0.47%  (p=0.003 n=19+18)
Throughput/16MB-8   51.0ms ±  1%     51.3ms ± 1%   +0.47%  (p=0.000 n=20+20)
Throughput/32MB-8    100ms ±  1%      100ms ± 1%   +0.24%  (p=0.033 n=20+20)
Throughput/64MB-8    197ms ±  0%      198ms ± 0%   +0.56%  (p=0.000 n=18+7)

The small MB runs are bimodal in both cases, probably GC pauses. But there's clearly no general slowdown anymore.

Fixes #15713.

Change-Id: I5fc44680ba71812d24baac142bceee0e23f2e382
Reviewed-on: https://go-review.googlesource.com/23487
Reviewed-by: Ian Lance Taylor <iant@golang.org>
8 years ago
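To make the sizing policy in that commit concrete, here is a simplified sketch of the arithmetic growth it describes: records grow 1 kB, 2 kB, ..., up to the 16 kB maximum, and once 128 kB has been sent the connection stays at full-size records. The function and constant names are assumptions for illustration, not the actual crypto/tls internals.

package main

import "fmt"

const (
	recordSizeBoostThreshold = 128 * 1024 // assumed name for the 128 kB cutoff
	maxPlaintext             = 16 * 1024  // 16 kB maximum TLS record payload
)

// nextRecordSize returns the payload size to use for the next record, growing
// arithmetically (1 kB, 2 kB, ..., 16 kB) until bytesSent reaches the cutoff.
func nextRecordSize(recordsSent, bytesSent int) int {
	if bytesSent >= recordSizeBoostThreshold {
		return maxPlaintext
	}
	size := (recordsSent + 1) * 1024
	if size > maxPlaintext {
		size = maxPlaintext
	}
	return size
}

func main() {
	bytes := 0
	for i := 0; i < 20; i++ {
		size := nextRecordSize(i, bytes)
		fmt.Printf("record %2d: %5d bytes (total sent %d)\n", i, size, bytes)
		bytes += size
	}
}

Running this prints records of 1024, 2048, ... bytes, reaching 16 kB records well before the 128 kB cutoff, which matches the roughly 2% overhead figure quoted for medium-sized responses.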
8 年前
crypto/tls: adjust dynamic record sizes to grow arithmetically The current code, introduced after Go 1.6 to improve latency on low-bandwidth connections, sends 1 kB packets until 1 MB has been sent, and then sends 16 kB packets (the maximum record size). Unfortunately this decreases throughput for 1-16 MB responses by 20% or so. Following discussion on #15713, change cutoff to 128 kB sent and also grow the size allowed for successive packets: 1 kB, 2 kB, 3 kB, ..., 15 kB, 16 kB. This fixes the throughput problems: the overhead is now closer to 2%. I hope this still helps with latency but I don't have a great way to test it. At the least, it's not worse than Go 1.6. Comparing MaxPacket vs DynamicPacket benchmarks: name maxpkt time/op dyn. time/op delta Throughput/1MB-8 5.07ms ± 7% 5.21ms ± 7% +2.73% (p=0.023 n=16+16) Throughput/2MB-8 15.7ms ±201% 8.4ms ± 5% ~ (p=0.604 n=20+16) Throughput/4MB-8 14.3ms ± 1% 14.5ms ± 1% +1.53% (p=0.000 n=16+16) Throughput/8MB-8 26.6ms ± 1% 26.8ms ± 1% +0.47% (p=0.003 n=19+18) Throughput/16MB-8 51.0ms ± 1% 51.3ms ± 1% +0.47% (p=0.000 n=20+20) Throughput/32MB-8 100ms ± 1% 100ms ± 1% +0.24% (p=0.033 n=20+20) Throughput/64MB-8 197ms ± 0% 198ms ± 0% +0.56% (p=0.000 n=18+7) The small MB runs are bimodal in both cases, probably GC pauses. But there's clearly no general slowdown anymore. Fixes #15713. Change-Id: I5fc44680ba71812d24baac142bceee0e23f2e382 Reviewed-on: https://go-review.googlesource.com/23487 Reviewed-by: Ian Lance Taylor <iant@golang.org>
8 年前
crypto/tls: adjust dynamic record sizes to grow arithmetically The current code, introduced after Go 1.6 to improve latency on low-bandwidth connections, sends 1 kB packets until 1 MB has been sent, and then sends 16 kB packets (the maximum record size). Unfortunately this decreases throughput for 1-16 MB responses by 20% or so. Following discussion on #15713, change cutoff to 128 kB sent and also grow the size allowed for successive packets: 1 kB, 2 kB, 3 kB, ..., 15 kB, 16 kB. This fixes the throughput problems: the overhead is now closer to 2%. I hope this still helps with latency but I don't have a great way to test it. At the least, it's not worse than Go 1.6. Comparing MaxPacket vs DynamicPacket benchmarks: name maxpkt time/op dyn. time/op delta Throughput/1MB-8 5.07ms ± 7% 5.21ms ± 7% +2.73% (p=0.023 n=16+16) Throughput/2MB-8 15.7ms ±201% 8.4ms ± 5% ~ (p=0.604 n=20+16) Throughput/4MB-8 14.3ms ± 1% 14.5ms ± 1% +1.53% (p=0.000 n=16+16) Throughput/8MB-8 26.6ms ± 1% 26.8ms ± 1% +0.47% (p=0.003 n=19+18) Throughput/16MB-8 51.0ms ± 1% 51.3ms ± 1% +0.47% (p=0.000 n=20+20) Throughput/32MB-8 100ms ± 1% 100ms ± 1% +0.24% (p=0.033 n=20+20) Throughput/64MB-8 197ms ± 0% 198ms ± 0% +0.56% (p=0.000 n=18+7) The small MB runs are bimodal in both cases, probably GC pauses. But there's clearly no general slowdown anymore. Fixes #15713. Change-Id: I5fc44680ba71812d24baac142bceee0e23f2e382 Reviewed-on: https://go-review.googlesource.com/23487 Reviewed-by: Ian Lance Taylor <iant@golang.org>
8 年前
crypto/tls: adjust dynamic record sizes to grow arithmetically The current code, introduced after Go 1.6 to improve latency on low-bandwidth connections, sends 1 kB packets until 1 MB has been sent, and then sends 16 kB packets (the maximum record size). Unfortunately this decreases throughput for 1-16 MB responses by 20% or so. Following discussion on #15713, change cutoff to 128 kB sent and also grow the size allowed for successive packets: 1 kB, 2 kB, 3 kB, ..., 15 kB, 16 kB. This fixes the throughput problems: the overhead is now closer to 2%. I hope this still helps with latency but I don't have a great way to test it. At the least, it's not worse than Go 1.6. Comparing MaxPacket vs DynamicPacket benchmarks: name maxpkt time/op dyn. time/op delta Throughput/1MB-8 5.07ms ± 7% 5.21ms ± 7% +2.73% (p=0.023 n=16+16) Throughput/2MB-8 15.7ms ±201% 8.4ms ± 5% ~ (p=0.604 n=20+16) Throughput/4MB-8 14.3ms ± 1% 14.5ms ± 1% +1.53% (p=0.000 n=16+16) Throughput/8MB-8 26.6ms ± 1% 26.8ms ± 1% +0.47% (p=0.003 n=19+18) Throughput/16MB-8 51.0ms ± 1% 51.3ms ± 1% +0.47% (p=0.000 n=20+20) Throughput/32MB-8 100ms ± 1% 100ms ± 1% +0.24% (p=0.033 n=20+20) Throughput/64MB-8 197ms ± 0% 198ms ± 0% +0.56% (p=0.000 n=18+7) The small MB runs are bimodal in both cases, probably GC pauses. But there's clearly no general slowdown anymore. Fixes #15713. Change-Id: I5fc44680ba71812d24baac142bceee0e23f2e382 Reviewed-on: https://go-review.googlesource.com/23487 Reviewed-by: Ian Lance Taylor <iant@golang.org>
8 年前
crypto/tls: adjust dynamic record sizes to grow arithmetically The current code, introduced after Go 1.6 to improve latency on low-bandwidth connections, sends 1 kB packets until 1 MB has been sent, and then sends 16 kB packets (the maximum record size). Unfortunately this decreases throughput for 1-16 MB responses by 20% or so. Following discussion on #15713, change cutoff to 128 kB sent and also grow the size allowed for successive packets: 1 kB, 2 kB, 3 kB, ..., 15 kB, 16 kB. This fixes the throughput problems: the overhead is now closer to 2%. I hope this still helps with latency but I don't have a great way to test it. At the least, it's not worse than Go 1.6. Comparing MaxPacket vs DynamicPacket benchmarks: name maxpkt time/op dyn. time/op delta Throughput/1MB-8 5.07ms ± 7% 5.21ms ± 7% +2.73% (p=0.023 n=16+16) Throughput/2MB-8 15.7ms ±201% 8.4ms ± 5% ~ (p=0.604 n=20+16) Throughput/4MB-8 14.3ms ± 1% 14.5ms ± 1% +1.53% (p=0.000 n=16+16) Throughput/8MB-8 26.6ms ± 1% 26.8ms ± 1% +0.47% (p=0.003 n=19+18) Throughput/16MB-8 51.0ms ± 1% 51.3ms ± 1% +0.47% (p=0.000 n=20+20) Throughput/32MB-8 100ms ± 1% 100ms ± 1% +0.24% (p=0.033 n=20+20) Throughput/64MB-8 197ms ± 0% 198ms ± 0% +0.56% (p=0.000 n=18+7) The small MB runs are bimodal in both cases, probably GC pauses. But there's clearly no general slowdown anymore. Fixes #15713. Change-Id: I5fc44680ba71812d24baac142bceee0e23f2e382 Reviewed-on: https://go-review.googlesource.com/23487 Reviewed-by: Ian Lance Taylor <iant@golang.org>
8 年前
crypto/tls: adjust dynamic record sizes to grow arithmetically The current code, introduced after Go 1.6 to improve latency on low-bandwidth connections, sends 1 kB packets until 1 MB has been sent, and then sends 16 kB packets (the maximum record size). Unfortunately this decreases throughput for 1-16 MB responses by 20% or so. Following discussion on #15713, change cutoff to 128 kB sent and also grow the size allowed for successive packets: 1 kB, 2 kB, 3 kB, ..., 15 kB, 16 kB. This fixes the throughput problems: the overhead is now closer to 2%. I hope this still helps with latency but I don't have a great way to test it. At the least, it's not worse than Go 1.6. Comparing MaxPacket vs DynamicPacket benchmarks: name maxpkt time/op dyn. time/op delta Throughput/1MB-8 5.07ms ± 7% 5.21ms ± 7% +2.73% (p=0.023 n=16+16) Throughput/2MB-8 15.7ms ±201% 8.4ms ± 5% ~ (p=0.604 n=20+16) Throughput/4MB-8 14.3ms ± 1% 14.5ms ± 1% +1.53% (p=0.000 n=16+16) Throughput/8MB-8 26.6ms ± 1% 26.8ms ± 1% +0.47% (p=0.003 n=19+18) Throughput/16MB-8 51.0ms ± 1% 51.3ms ± 1% +0.47% (p=0.000 n=20+20) Throughput/32MB-8 100ms ± 1% 100ms ± 1% +0.24% (p=0.033 n=20+20) Throughput/64MB-8 197ms ± 0% 198ms ± 0% +0.56% (p=0.000 n=18+7) The small MB runs are bimodal in both cases, probably GC pauses. But there's clearly no general slowdown anymore. Fixes #15713. Change-Id: I5fc44680ba71812d24baac142bceee0e23f2e382 Reviewed-on: https://go-review.googlesource.com/23487 Reviewed-by: Ian Lance Taylor <iant@golang.org>
8 年前
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package tls

import (
	"bytes"
	"crypto/x509"
	"errors"
	"fmt"
	"internal/testenv"
	"io"
	"io/ioutil"
	"math"
	"math/rand"
	"net"
	"os"
	"reflect"
	"strings"
	"testing"
	"testing/quick"
	"time"
)

var rsaCertPEM = `-----BEGIN CERTIFICATE-----
MIIB0zCCAX2gAwIBAgIJAI/M7BYjwB+uMA0GCSqGSIb3DQEBBQUAMEUxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQwHhcNMTIwOTEyMjE1MjAyWhcNMTUwOTEyMjE1MjAyWjBF
MQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50
ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBANLJ
hPHhITqQbPklG3ibCVxwGMRfp/v4XqhfdQHdcVfHap6NQ5Wok/4xIA+ui35/MmNa
rtNuC+BdZ1tMuVCPFZcCAwEAAaNQME4wHQYDVR0OBBYEFJvKs8RfJaXTH08W+SGv
zQyKn0H8MB8GA1UdIwQYMBaAFJvKs8RfJaXTH08W+SGvzQyKn0H8MAwGA1UdEwQF
MAMBAf8wDQYJKoZIhvcNAQEFBQADQQBJlffJHybjDGxRMqaRmDhX0+6v02TUKZsW
r5QuVbpQhH6u+0UgcW0jp9QwpxoPTLTWGXEWBBBurxFwiCBhkQ+V
-----END CERTIFICATE-----
`

var rsaKeyPEM = `-----BEGIN RSA PRIVATE KEY-----
MIIBOwIBAAJBANLJhPHhITqQbPklG3ibCVxwGMRfp/v4XqhfdQHdcVfHap6NQ5Wo
k/4xIA+ui35/MmNartNuC+BdZ1tMuVCPFZcCAwEAAQJAEJ2N+zsR0Xn8/Q6twa4G
6OB1M1WO+k+ztnX/1SvNeWu8D6GImtupLTYgjZcHufykj09jiHmjHx8u8ZZB/o1N
MQIhAPW+eyZo7ay3lMz1V01WVjNKK9QSn1MJlb06h/LuYv9FAiEA25WPedKgVyCW
SmUwbPw8fnTcpqDWE3yTO3vKcebqMSsCIBF3UmVue8YU3jybC3NxuXq3wNm34R8T
xVLHwDXh/6NJAiEAl2oHGGLz64BuAfjKrqwz7qMYr9HCLIe/YsoWq/olzScCIQDi
D2lWusoe2/nEqfDVVWGWlyJ7yOmqaVm/iNUN9B2N2g==
-----END RSA PRIVATE KEY-----
`

// keyPEM is the same as rsaKeyPEM, but declares itself as just
// "PRIVATE KEY", not "RSA PRIVATE KEY". https://golang.org/issue/4477
var keyPEM = `-----BEGIN PRIVATE KEY-----
MIIBOwIBAAJBANLJhPHhITqQbPklG3ibCVxwGMRfp/v4XqhfdQHdcVfHap6NQ5Wo
k/4xIA+ui35/MmNartNuC+BdZ1tMuVCPFZcCAwEAAQJAEJ2N+zsR0Xn8/Q6twa4G
6OB1M1WO+k+ztnX/1SvNeWu8D6GImtupLTYgjZcHufykj09jiHmjHx8u8ZZB/o1N
MQIhAPW+eyZo7ay3lMz1V01WVjNKK9QSn1MJlb06h/LuYv9FAiEA25WPedKgVyCW
SmUwbPw8fnTcpqDWE3yTO3vKcebqMSsCIBF3UmVue8YU3jybC3NxuXq3wNm34R8T
xVLHwDXh/6NJAiEAl2oHGGLz64BuAfjKrqwz7qMYr9HCLIe/YsoWq/olzScCIQDi
D2lWusoe2/nEqfDVVWGWlyJ7yOmqaVm/iNUN9B2N2g==
-----END PRIVATE KEY-----
`

var ecdsaCertPEM = `-----BEGIN CERTIFICATE-----
MIIB/jCCAWICCQDscdUxw16XFDAJBgcqhkjOPQQBMEUxCzAJBgNVBAYTAkFVMRMw
EQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBXaWRnaXRzIFB0
eSBMdGQwHhcNMTIxMTE0MTI0MDQ4WhcNMTUxMTE0MTI0MDQ4WjBFMQswCQYDVQQG
EwJBVTETMBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lk
Z2l0cyBQdHkgTHRkMIGbMBAGByqGSM49AgEGBSuBBAAjA4GGAAQBY9+my9OoeSUR
lDQdV/x8LsOuLilthhiS1Tz4aGDHIPwC1mlvnf7fg5lecYpMCrLLhauAc1UJXcgl
01xoLuzgtAEAgv2P/jgytzRSpUYvgLBt1UA0leLYBy6mQQbrNEuqT3INapKIcUv8
XxYP0xMEUksLPq6Ca+CRSqTtrd/23uTnapkwCQYHKoZIzj0EAQOBigAwgYYCQXJo
A7Sl2nLVf+4Iu/tAX/IF4MavARKC4PPHK3zfuGfPR3oCCcsAoz3kAzOeijvd0iXb
H5jBImIxPL4WxQNiBTexAkF8D1EtpYuWdlVQ80/h/f4pBcGiXPqX5h2PQSQY7hP1
+jwM1FGS4fREIOvlBYr/SzzQRtwrvrzGYxDEDbsC0ZGRnA==
-----END CERTIFICATE-----
`

var ecdsaKeyPEM = `-----BEGIN EC PARAMETERS-----
BgUrgQQAIw==
-----END EC PARAMETERS-----
-----BEGIN EC PRIVATE KEY-----
MIHcAgEBBEIBrsoKp0oqcv6/JovJJDoDVSGWdirrkgCWxrprGlzB9o0X8fV675X0
NwuBenXFfeZvVcwluO7/Q9wkYoPd/t3jGImgBwYFK4EEACOhgYkDgYYABAFj36bL
06h5JRGUNB1X/Hwuw64uKW2GGJLVPPhoYMcg/ALWaW+d/t+DmV5xikwKssuFq4Bz
VQldyCXTXGgu7OC0AQCC/Y/+ODK3NFKlRi+AsG3VQDSV4tgHLqZBBus0S6pPcg1q
kohxS/xfFg/TEwRSSws+roJr4JFKpO2t3/be5OdqmQ==
-----END EC PRIVATE KEY-----
`

var keyPairTests = []struct {
	algo string
	cert string
	key  string
}{
	{"ECDSA", ecdsaCertPEM, ecdsaKeyPEM},
	{"RSA", rsaCertPEM, rsaKeyPEM},
	{"RSA-untyped", rsaCertPEM, keyPEM}, // golang.org/issue/4477
}

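// TestX509KeyPair checks that X509KeyPair can load each test pair from
// PEM data that concatenates the certificate and key, in either order.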
func TestX509KeyPair(t *testing.T) {
	t.Parallel()
	var pem []byte
	for _, test := range keyPairTests {
		pem = []byte(test.cert + test.key)
		if _, err := X509KeyPair(pem, pem); err != nil {
			t.Errorf("Failed to load %s cert followed by %s key: %s", test.algo, test.algo, err)
		}
		pem = []byte(test.key + test.cert)
		if _, err := X509KeyPair(pem, pem); err != nil {
			t.Errorf("Failed to load %s key followed by %s cert: %s", test.algo, test.algo, err)
		}
	}
}

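// TestX509KeyPairErrors checks that X509KeyPair reports descriptive errors
// for swapped arguments, for two certificates, and for nonsense PEM input.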
func TestX509KeyPairErrors(t *testing.T) {
	_, err := X509KeyPair([]byte(rsaKeyPEM), []byte(rsaCertPEM))
	if err == nil {
		t.Fatalf("X509KeyPair didn't return an error when arguments were switched")
	}
	if subStr := "been switched"; !strings.Contains(err.Error(), subStr) {
		t.Fatalf("Expected %q in the error when switching arguments to X509KeyPair, but the error was %q", subStr, err)
	}

	_, err = X509KeyPair([]byte(rsaCertPEM), []byte(rsaCertPEM))
	if err == nil {
		t.Fatalf("X509KeyPair didn't return an error when both arguments were certificates")
	}
	if subStr := "certificate"; !strings.Contains(err.Error(), subStr) {
		t.Fatalf("Expected %q in the error when both arguments to X509KeyPair were certificates, but the error was %q", subStr, err)
	}

	const nonsensePEM = `
-----BEGIN NONSENSE-----
Zm9vZm9vZm9v
-----END NONSENSE-----
`

	_, err = X509KeyPair([]byte(nonsensePEM), []byte(nonsensePEM))
	if err == nil {
		t.Fatalf("X509KeyPair didn't return an error when both arguments were nonsense")
	}
	if subStr := "NONSENSE"; !strings.Contains(err.Error(), subStr) {
		t.Fatalf("Expected %q in the error when both arguments to X509KeyPair were nonsense, but the error was %q", subStr, err)
	}
}

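// TestX509MixedKeyPair checks that a certificate cannot be paired with a
// private key of a different algorithm.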
func TestX509MixedKeyPair(t *testing.T) {
	if _, err := X509KeyPair([]byte(rsaCertPEM), []byte(ecdsaKeyPEM)); err == nil {
		t.Error("Load of RSA certificate succeeded with ECDSA private key")
	}
	if _, err := X509KeyPair([]byte(ecdsaCertPEM), []byte(rsaKeyPEM)); err == nil {
		t.Error("Load of ECDSA certificate succeeded with RSA private key")
	}
}

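// newLocalListener returns a listener on a loopback port, preferring IPv4
// and falling back to IPv6.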
func newLocalListener(t testing.TB) net.Listener {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		ln, err = net.Listen("tcp6", "[::1]:0")
	}
	if err != nil {
		t.Fatal(err)
	}
	return ln
}

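// TestDialTimeout checks that DialWithDialer honors the dialer's Timeout by
// dialing a server that accepts the connection but never answers the handshake.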
func TestDialTimeout(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping in short mode")
	}
	listener := newLocalListener(t)

	addr := listener.Addr().String()
	defer listener.Close()

	complete := make(chan bool)
	defer close(complete)

	go func() {
		conn, err := listener.Accept()
		if err != nil {
			t.Error(err)
			return
		}
		<-complete
		conn.Close()
	}()

	dialer := &net.Dialer{
		Timeout: 10 * time.Millisecond,
	}

	var err error
	if _, err = DialWithDialer(dialer, "tcp", addr, nil); err == nil {
		t.Fatal("DialWithDialer completed successfully")
	}

	if !isTimeoutError(err) {
		t.Errorf("resulting error not a timeout: %v\nType %T: %#v", err, err, err)
	}
}

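// isTimeoutError reports whether err is a net.Error whose Timeout method
// returns true.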
func isTimeoutError(err error) bool {
	if ne, ok := err.(net.Error); ok {
		return ne.Timeout()
	}
	return false
}

// tests that Conn.Read returns (non-zero, io.EOF) instead of
// (non-zero, nil) when a Close (alertCloseNotify) is sitting right
// behind the application data in the buffer.
func TestConnReadNonzeroAndEOF(t *testing.T) {
	// This test is racy: it assumes that after a write to a
	// localhost TCP connection, the peer TCP connection can
	// immediately read it. Because it's racy, we skip this test
	// in short mode, and then retry it several times with an
	// increasing sleep in between our final write (via srv.Close
	// below) and the following read.
	if testing.Short() {
		t.Skip("skipping in short mode")
	}
	var err error
	for delay := time.Millisecond; delay <= 64*time.Millisecond; delay *= 2 {
		if err = testConnReadNonzeroAndEOF(t, delay); err == nil {
			return
		}
	}
	t.Error(err)
}

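// testConnReadNonzeroAndEOF is one attempt of TestConnReadNonzeroAndEOF,
// sleeping for the given delay between the server's Close and the final Read.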
func testConnReadNonzeroAndEOF(t *testing.T, delay time.Duration) error {
	ln := newLocalListener(t)
	defer ln.Close()

	srvCh := make(chan *Conn, 1)
	var serr error
	go func() {
		sconn, err := ln.Accept()
		if err != nil {
			serr = err
			srvCh <- nil
			return
		}
		serverConfig := testConfig.Clone()
		srv := Server(sconn, serverConfig)
		if err := srv.Handshake(); err != nil {
			serr = fmt.Errorf("handshake: %v", err)
			srvCh <- nil
			return
		}
		srvCh <- srv
	}()

	clientConfig := testConfig.Clone()
	conn, err := Dial("tcp", ln.Addr().String(), clientConfig)
	if err != nil {
		t.Fatal(err)
	}
	defer conn.Close()

	srv := <-srvCh
	if srv == nil {
		return serr
	}

	buf := make([]byte, 6)

	srv.Write([]byte("foobar"))
	n, err := conn.Read(buf)
	if n != 6 || err != nil || string(buf) != "foobar" {
		return fmt.Errorf("Read = %d, %v, data %q; want 6, nil, foobar", n, err, buf)
	}

	srv.Write([]byte("abcdef"))
	srv.Close()
	time.Sleep(delay)
	n, err = conn.Read(buf)
	if n != 6 || string(buf) != "abcdef" {
		return fmt.Errorf("Read = %d, buf = %q; want 6, abcdef", n, buf)
	}
	if err != io.EOF {
		return fmt.Errorf("Second Read error = %v; want io.EOF", err)
	}
	return nil
}

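// TestTLSUniqueMatches checks that client and server report the same
// tls-unique channel binding, both for a full handshake and for a resumed
// session.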
func TestTLSUniqueMatches(t *testing.T) {
	ln := newLocalListener(t)
	defer ln.Close()

	serverTLSUniques := make(chan []byte)
	go func() {
		for i := 0; i < 2; i++ {
			sconn, err := ln.Accept()
			if err != nil {
				t.Error(err)
				return
			}
			serverConfig := testConfig.Clone()
			srv := Server(sconn, serverConfig)
			if err := srv.Handshake(); err != nil {
				t.Error(err)
				return
			}
			serverTLSUniques <- srv.ConnectionState().TLSUnique
		}
	}()

	clientConfig := testConfig.Clone()
	clientConfig.ClientSessionCache = NewLRUClientSessionCache(1)
	conn, err := Dial("tcp", ln.Addr().String(), clientConfig)
	if err != nil {
		t.Fatal(err)
	}
	if !bytes.Equal(conn.ConnectionState().TLSUnique, <-serverTLSUniques) {
		t.Error("client and server channel bindings differ")
	}
	conn.Close()

	conn, err = Dial("tcp", ln.Addr().String(), clientConfig)
	if err != nil {
		t.Fatal(err)
	}
	defer conn.Close()
	if !conn.ConnectionState().DidResume {
		t.Error("second session did not use resumption")
	}
	if !bytes.Equal(conn.ConnectionState().TLSUnique, <-serverTLSUniques) {
		t.Error("client and server channel bindings differ when session resumption is used")
	}
}

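// TestVerifyHostname checks VerifyHostname against a live connection to
// www.google.com, including the InsecureSkipVerify case where no hostname
// can be verified.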
func TestVerifyHostname(t *testing.T) {
	testenv.MustHaveExternalNetwork(t)

	c, err := Dial("tcp", "www.google.com:https", nil)
	if err != nil {
		t.Fatal(err)
	}
	if err := c.VerifyHostname("www.google.com"); err != nil {
		t.Fatalf("verify www.google.com: %v", err)
	}
	if err := c.VerifyHostname("www.yahoo.com"); err == nil {
		t.Fatalf("verify www.yahoo.com succeeded")
	}

	c, err = Dial("tcp", "www.google.com:https", &Config{InsecureSkipVerify: true})
	if err != nil {
		t.Fatal(err)
	}
	if err := c.VerifyHostname("www.google.com"); err == nil {
		t.Fatalf("verify www.google.com succeeded with InsecureSkipVerify=true")
	}
	if err := c.VerifyHostname("www.yahoo.com"); err == nil {
		t.Fatalf("verify www.yahoo.com succeeded with InsecureSkipVerify=true")
	}
}

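// TestVerifyHostnameResumed checks that VerifyHostname and VerifiedChains
// still work on a connection established via session resumption.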
func TestVerifyHostnameResumed(t *testing.T) {
	testenv.MustHaveExternalNetwork(t)

	config := &Config{
		ClientSessionCache: NewLRUClientSessionCache(32),
	}
	for i := 0; i < 2; i++ {
		c, err := Dial("tcp", "www.google.com:https", config)
		if err != nil {
			t.Fatalf("Dial #%d: %v", i, err)
		}
		cs := c.ConnectionState()
		if i > 0 && !cs.DidResume {
			t.Fatalf("Subsequent connection unexpectedly didn't resume")
		}
		if cs.VerifiedChains == nil {
			t.Fatalf("Dial #%d: cs.VerifiedChains == nil", i)
		}
		if err := c.VerifyHostname("www.google.com"); err != nil {
			t.Fatalf("verify www.google.com #%d: %v", i, err)
		}
		c.Close()
	}
}

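// TestConnCloseBreakingWrite checks that closing a Conn unblocks a Write
// that is stuck in the underlying net.Conn, and that the blocked Write and
// the subsequent Close return the expected errors.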
func TestConnCloseBreakingWrite(t *testing.T) {
	ln := newLocalListener(t)
	defer ln.Close()

	srvCh := make(chan *Conn, 1)
	var serr error
	var sconn net.Conn
	go func() {
		var err error
		sconn, err = ln.Accept()
		if err != nil {
			serr = err
			srvCh <- nil
			return
		}
		serverConfig := testConfig.Clone()
		srv := Server(sconn, serverConfig)
		if err := srv.Handshake(); err != nil {
			serr = fmt.Errorf("handshake: %v", err)
			srvCh <- nil
			return
		}
		srvCh <- srv
	}()

	cconn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		t.Fatal(err)
	}
	defer cconn.Close()

	conn := &changeImplConn{
		Conn: cconn,
	}

	clientConfig := testConfig.Clone()
	tconn := Client(conn, clientConfig)
	if err := tconn.Handshake(); err != nil {
		t.Fatal(err)
	}

	srv := <-srvCh
	if srv == nil {
		t.Fatal(serr)
	}
	defer sconn.Close()

	connClosed := make(chan struct{})
	conn.closeFunc = func() error {
		close(connClosed)
		return nil
	}

	inWrite := make(chan bool, 1)
	var errConnClosed = errors.New("conn closed for test")
	conn.writeFunc = func(p []byte) (n int, err error) {
		inWrite <- true
		<-connClosed
		return 0, errConnClosed
	}

	closeReturned := make(chan bool, 1)
	go func() {
		<-inWrite
		tconn.Close() // test that this doesn't block forever.
		closeReturned <- true
	}()

	_, err = tconn.Write([]byte("foo"))
	if err != errConnClosed {
		t.Errorf("Write error = %v; want errConnClosed", err)
	}

	<-closeReturned
	if err := tconn.Close(); err != errClosed {
		t.Errorf("Close error = %v; want errClosed", err)
	}
}

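// TestConnCloseWrite exercises CloseWrite from both sides of a connection,
// and also checks that calling it before the handshake fails with
// errEarlyCloseWrite.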
func TestConnCloseWrite(t *testing.T) {
	ln := newLocalListener(t)
	defer ln.Close()

	clientDoneChan := make(chan struct{})

	serverCloseWrite := func() error {
		sconn, err := ln.Accept()
		if err != nil {
			return fmt.Errorf("accept: %v", err)
		}
		defer sconn.Close()

		serverConfig := testConfig.Clone()
		srv := Server(sconn, serverConfig)
		if err := srv.Handshake(); err != nil {
			return fmt.Errorf("handshake: %v", err)
		}
		defer srv.Close()

		data, err := ioutil.ReadAll(srv)
		if err != nil {
			return err
		}
		if len(data) > 0 {
			return fmt.Errorf("Read data = %q; want nothing", data)
		}

		if err := srv.CloseWrite(); err != nil {
			return fmt.Errorf("server CloseWrite: %v", err)
		}

		// Wait for clientCloseWrite to finish, so we know we
		// tested the CloseWrite before we defer the
		// sconn.Close above, which would also cause the
		// client to unblock like CloseWrite.
		<-clientDoneChan
		return nil
	}

	clientCloseWrite := func() error {
		defer close(clientDoneChan)

		clientConfig := testConfig.Clone()
		conn, err := Dial("tcp", ln.Addr().String(), clientConfig)
		if err != nil {
			return err
		}
		if err := conn.Handshake(); err != nil {
			return err
		}
		defer conn.Close()

		if err := conn.CloseWrite(); err != nil {
			return fmt.Errorf("client CloseWrite: %v", err)
		}

		if _, err := conn.Write([]byte{0}); err != errShutdown {
			return fmt.Errorf("CloseWrite error = %v; want errShutdown", err)
		}

		data, err := ioutil.ReadAll(conn)
		if err != nil {
			return err
		}
		if len(data) > 0 {
			return fmt.Errorf("Read data = %q; want nothing", data)
		}
		return nil
	}

	errChan := make(chan error, 2)

	go func() { errChan <- serverCloseWrite() }()
	go func() { errChan <- clientCloseWrite() }()

	for i := 0; i < 2; i++ {
		select {
		case err := <-errChan:
			if err != nil {
				t.Fatal(err)
			}
		case <-time.After(10 * time.Second):
			t.Fatal("deadlock")
		}
	}

	// Also test CloseWrite being called before the handshake is
	// finished:
	{
		ln2 := newLocalListener(t)
		defer ln2.Close()

		netConn, err := net.Dial("tcp", ln2.Addr().String())
		if err != nil {
			t.Fatal(err)
		}
		defer netConn.Close()
		conn := Client(netConn, testConfig.Clone())

		if err := conn.CloseWrite(); err != errEarlyCloseWrite {
			t.Errorf("CloseWrite error = %v; want errEarlyCloseWrite", err)
		}
	}
}

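// TestClone fills every settable Config field with a non-zero value and
// checks that Clone copies all of them.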
func TestClone(t *testing.T) {
	var c1 Config
	v := reflect.ValueOf(&c1).Elem()

	rnd := rand.New(rand.NewSource(time.Now().Unix()))
	typ := v.Type()
	for i := 0; i < typ.NumField(); i++ {
		f := v.Field(i)
		if !f.CanSet() {
			// unexported field; not cloned.
			continue
		}

		// testing/quick can't handle functions or interfaces.
		fn := typ.Field(i).Name
		switch fn {
		case "Rand":
			f.Set(reflect.ValueOf(io.Reader(os.Stdin)))
			continue
		case "Time", "GetCertificate", "GetConfigForClient", "VerifyPeerCertificate", "GetClientCertificate":
			// DeepEqual can't compare functions.
			continue
		case "Certificates":
			f.Set(reflect.ValueOf([]Certificate{
				{Certificate: [][]byte{{'b'}}},
			}))
			continue
		case "NameToCertificate":
			f.Set(reflect.ValueOf(map[string]*Certificate{"a": nil}))
			continue
		case "RootCAs", "ClientCAs":
			f.Set(reflect.ValueOf(x509.NewCertPool()))
			continue
		case "ClientSessionCache":
			f.Set(reflect.ValueOf(NewLRUClientSessionCache(10)))
			continue
		case "KeyLogWriter":
			f.Set(reflect.ValueOf(io.Writer(os.Stdout)))
			continue
		}

		q, ok := quick.Value(f.Type(), rnd)
		if !ok {
			t.Fatalf("quick.Value failed on field %s", fn)
		}
		f.Set(q)
	}

	c2 := c1.Clone()
	// DeepEqual also compares unexported fields, thus c2 needs to have run
	// serverInit in order to be DeepEqual to c1. Cloning it and discarding
	// the result is sufficient.
	c2.Clone()

	if !reflect.DeepEqual(&c1, c2) {
		t.Errorf("clone failed to copy a field")
	}
}

// changeImplConn is a net.Conn which can change its Write and Close
// methods.
type changeImplConn struct {
	net.Conn
	writeFunc func([]byte) (int, error)
	closeFunc func() error
}

func (w *changeImplConn) Write(p []byte) (n int, err error) {
	if w.writeFunc != nil {
		return w.writeFunc(p)
	}
	return w.Conn.Write(p)
}

func (w *changeImplConn) Close() error {
	if w.closeFunc != nil {
		return w.closeFunc()
	}
	return w.Conn.Close()
}

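// throughput measures the time to echo totalBytes through a TLS connection
// in 32 kB chunks, with dynamic record sizing optionally disabled.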
func throughput(b *testing.B, totalBytes int64, dynamicRecordSizingDisabled bool) {
	ln := newLocalListener(b)
	defer ln.Close()

	N := b.N

	// Less than 64KB because Windows appears to use a TCP rwin < 64KB.
	// See Issue #15899.
	const bufsize = 32 << 10

	go func() {
		buf := make([]byte, bufsize)
		for i := 0; i < N; i++ {
			sconn, err := ln.Accept()
			if err != nil {
				// panic rather than synchronize to avoid benchmark overhead
				// (cannot call b.Fatal in goroutine)
				panic(fmt.Errorf("accept: %v", err))
			}
			serverConfig := testConfig.Clone()
			serverConfig.CipherSuites = nil // the defaults may prefer faster ciphers
			serverConfig.DynamicRecordSizingDisabled = dynamicRecordSizingDisabled
			srv := Server(sconn, serverConfig)
			if err := srv.Handshake(); err != nil {
				panic(fmt.Errorf("handshake: %v", err))
			}
			if _, err := io.CopyBuffer(srv, srv, buf); err != nil {
				panic(fmt.Errorf("copy buffer: %v", err))
			}
		}
	}()

	b.SetBytes(totalBytes)
	clientConfig := testConfig.Clone()
	clientConfig.CipherSuites = nil // the defaults may prefer faster ciphers
	clientConfig.DynamicRecordSizingDisabled = dynamicRecordSizingDisabled

	buf := make([]byte, bufsize)
	chunks := int(math.Ceil(float64(totalBytes) / float64(len(buf))))
	for i := 0; i < N; i++ {
		conn, err := Dial("tcp", ln.Addr().String(), clientConfig)
		if err != nil {
			b.Fatal(err)
		}
		for j := 0; j < chunks; j++ {
			_, err := conn.Write(buf)
			if err != nil {
				b.Fatal(err)
			}
			_, err = io.ReadFull(conn, buf)
			if err != nil {
				b.Fatal(err)
			}
		}
		conn.Close()
	}
}

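// BenchmarkThroughput runs the throughput benchmark for 1-64 MB transfers,
// both with maximum-size records ("Max") and with dynamic record sizing
// ("Dynamic").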
func BenchmarkThroughput(b *testing.B) {
	for _, mode := range []string{"Max", "Dynamic"} {
		for size := 1; size <= 64; size <<= 1 {
			name := fmt.Sprintf("%sPacket/%dMB", mode, size)
			b.Run(name, func(b *testing.B) {
				throughput(b, int64(size<<20), mode == "Max")
			})
		}
	}
}

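// slowConn wraps a net.Conn and rate-limits writes to roughly bps bits
// per second.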
type slowConn struct {
	net.Conn
	bps int
}

func (c *slowConn) Write(p []byte) (int, error) {
	if c.bps == 0 {
		panic("too slow")
	}
	t0 := time.Now()
	wrote := 0
	for wrote < len(p) {
		time.Sleep(100 * time.Microsecond)
		allowed := int(time.Since(t0).Seconds()*float64(c.bps)) / 8
		if allowed > len(p) {
			allowed = len(p)
		}
		if wrote < allowed {
			n, err := c.Conn.Write(p[wrote:allowed])
			wrote += n
			if err != nil {
				return wrote, err
			}
		}
	}
	return len(p), nil
}

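// latency measures how long it takes for the first echoed byte of a 16 kB
// write to come back over a connection whose server writes are limited to
// bps bits per second, with dynamic record sizing optionally disabled.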
func latency(b *testing.B, bps int, dynamicRecordSizingDisabled bool) {
	ln := newLocalListener(b)
	defer ln.Close()

	N := b.N

	go func() {
		for i := 0; i < N; i++ {
			sconn, err := ln.Accept()
			if err != nil {
				// panic rather than synchronize to avoid benchmark overhead
				// (cannot call b.Fatal in goroutine)
				panic(fmt.Errorf("accept: %v", err))
			}
			serverConfig := testConfig.Clone()
			serverConfig.DynamicRecordSizingDisabled = dynamicRecordSizingDisabled
			srv := Server(&slowConn{sconn, bps}, serverConfig)
			if err := srv.Handshake(); err != nil {
				panic(fmt.Errorf("handshake: %v", err))
			}
			io.Copy(srv, srv)
		}
	}()

	clientConfig := testConfig.Clone()
	clientConfig.DynamicRecordSizingDisabled = dynamicRecordSizingDisabled

	buf := make([]byte, 16384)
	peek := make([]byte, 1)

	for i := 0; i < N; i++ {
		conn, err := Dial("tcp", ln.Addr().String(), clientConfig)
		if err != nil {
			b.Fatal(err)
		}
		// make sure we're connected and previous connection has stopped
		if _, err := conn.Write(buf[:1]); err != nil {
			b.Fatal(err)
		}
		if _, err := io.ReadFull(conn, peek); err != nil {
			b.Fatal(err)
		}
		if _, err := conn.Write(buf); err != nil {
			b.Fatal(err)
		}
		if _, err = io.ReadFull(conn, peek); err != nil {
			b.Fatal(err)
		}
		conn.Close()
	}
}

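// BenchmarkLatency runs the latency benchmark at several simulated link
// speeds, both with maximum-size records ("Max") and with dynamic record
// sizing ("Dynamic").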
func BenchmarkLatency(b *testing.B) {
	for _, mode := range []string{"Max", "Dynamic"} {
		for _, kbps := range []int{200, 500, 1000, 2000, 5000} {
			name := fmt.Sprintf("%sPacket/%dkbps", mode, kbps)
			b.Run(name, func(b *testing.B) {
				latency(b, kbps*1000, mode == "Max")
			})
		}
	}
}