Session resumption involves a version check, so version negotiation must
happen first. Currently, the DTLS implementation cannot do session
resumption in DTLS 1.0 because ssl_version is always checked against 1.2.
Switching the order also removes the need to fix up ssl_version in DTLS
version negotiation.
Signed-off-by: Kurt Roeckx <kurt@roeckx.be>
Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
RT: #4392, MR: #2452
We now send the highest version supported by the client, even if the session
uses an older version.
This fixes two problems:
- When you try to reuse a session but the other side doesn't reuse it and
uses a different protocol version, the connection will fail.
- When you try to reuse a session with an old version, you might get stuck
reusing that old version even though both sides support a newer one.
Signed-off-by: Kurt Roeckx <kurt@roeckx.be>
Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
GH: #852, MR: #2452
BIO_new, etc., don't need a non-const BIO_METHOD. This allows all the
built-in method tables to live in .rodata.
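A minimal caller-side sketch of what the const-correct API allows, assuming
the built-in accessors (such as BIO_s_mem()) now return const BIO_METHOD
pointers:

    #include <openssl/bio.h>

    int make_mem_bio(void)
    {
        const BIO_METHOD *meth = BIO_s_mem(); /* points at a read-only table */
        BIO *b = BIO_new(meth);               /* BIO_new() accepts const */

        if (b == NULL)
            return 0;
        BIO_free(b);
        return 1;
    }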
Reviewed-by: Richard Levitte <levitte@openssl.org>
* Clear proposed, along with selected, before looking at ClientHello
* Add test case for above
* Clear NPN seen after selecting ALPN on server
* Minor documentation updates
Reviewed-by: Emilia Käsper <emilia@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
Don't have #error statements in header files; instead, wrap the contents
of each such file in #ifndef OPENSSL_NO_xxx.
This means it is now always safe to include the header file.
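As an illustration only (OPENSSL_NO_FOO and the header name are
hypothetical), the wrapping pattern looks like this:

    #ifndef HEADER_FOO_H
    # define HEADER_FOO_H

    # include <openssl/opensslconf.h>

    # ifndef OPENSSL_NO_FOO
    /* ... all declarations for the FOO feature live here; the header is
     * effectively empty when the feature is disabled, so including it
     * never triggers an #error ... */
    # endif

    #endif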
Reviewed-by: Richard Levitte <levitte@openssl.org>
If a call to EVP_DecryptUpdate fails then a memory leak could occur.
Ensure that the memory is freed appropriately.
Issue reported by Guido Vranken.
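A sketch of the kind of error path being fixed; the helper and buffer names
are illustrative, not the actual code:

    #include <openssl/evp.h>
    #include <openssl/crypto.h>

    static unsigned char *decrypt_chunk(EVP_CIPHER_CTX *ctx,
                                        const unsigned char *in, int inl,
                                        int *outl)
    {
        unsigned char *out = OPENSSL_malloc(inl + EVP_MAX_BLOCK_LENGTH);

        if (out == NULL)
            return NULL;
        if (!EVP_DecryptUpdate(ctx, out, outl, in, inl)) {
            OPENSSL_free(out);   /* previously this buffer could be leaked */
            return NULL;
        }
        return out;
    }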
Reviewed-by: Richard Levitte <levitte@openssl.org>
Remove 'log' field from SCT and related accessors
In order to still have access to an SCT's CTLOG when calling SCT_print,
SSL_CTX_get0_ctlog_store has been added.
Improved documentation for some CT functions in openssl/ssl.h.
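A sketch of the resulting calling pattern, assuming the OpenSSL 1.1.0
signatures for SCT_print() and SSL_CTX_get0_ctlog_store(); error handling
omitted:

    #include <openssl/ssl.h>
    #include <openssl/ct.h>

    static void print_sct(SSL_CTX *ctx, const SCT *sct, BIO *out)
    {
        /* The SCT no longer carries its CTLOG, so fetch the log store
         * from the SSL_CTX and let SCT_print() look the log up. */
        const CTLOG_STORE *log_store = SSL_CTX_get0_ctlog_store(ctx);

        SCT_print(sct, out, 0, log_store);
    }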
Reviewed-by: Emilia Käsper <emilia@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
Adjust ssl_set_client_hello_version to get both the minimum and maximum
supported versions, and then use the maximum version.
Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
MR: #1595
Copy/paste error between SSL_CIPHER_get_kx_nid() and
SSL_CIPHER_get_auth_nid(); the wrong table was referenced.
Signed-off-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
* Perform ALPN after the SNI callback; the SSL_CTX may change due to
that processing
* Add flags to indicate that we actually sent ALPN, so we can properly
error out if it is unexpectedly received
* Clean up ssl3_free(): no need to explicitly clear fields that are
covered by the memset
* Document the ALPN functions
Signed-off-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Emilia Käsper <emilia@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
CCA8, CCA9, CCAA, CCAB, CCAC, CCAD, and CCAE are now present in
https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml
so remove the "as per draft-ietf-tls-chacha20-poly1305-03" note
accordingly.
Signed-off-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
The numpipes argument to ssl3_enc/tls1_enc is actually the number of
records passed in the array. To make this clearer, rename the argument to
|n_recs|.
Reviewed-by: Tim Hudson <tjh@openssl.org>
Rename the have_whole_app_data_record_waiting() function to include the
ssl3_record prefix...and make it a bit shorter.
Reviewed-by: Tim Hudson <tjh@openssl.org>
We used to use the wrec field in the record layer for keeping track of the
current record that we are writing out. As part of the pipelining changes
this has been moved to stack-allocated variables that do the same thing, so
the field is no longer needed.
Reviewed-by: Tim Hudson <tjh@openssl.org>
This is similar to SSL_pending(), but simply returns 1 if there is data
pending in the internal OpenSSL buffers, or 0 otherwise (as opposed to
SSL_pending(), which returns the number of bytes available). Unlike
SSL_pending(), this will work even if "read_ahead" is set (which is the
case if you are using read pipelining, or if you are doing DTLS). A return
value of 1 means that we have unprocessed data. It does *not* necessarily
indicate that application data will be returned from a call to
SSL_read(). The unprocessed data may not be application data, or there
could be errors when we attempt to parse the records.
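A sketch contrasting the two calls, assuming the new function is exposed as
SSL_has_pending():

    #include <openssl/ssl.h>

    static void report_pending(SSL *s)
    {
        int buffered = SSL_has_pending(s); /* 1 if any unprocessed data is buffered */
        int app_bytes = SSL_pending(s);    /* bytes of already-processed app data */

        if (buffered && app_bytes == 0) {
            /* Data is sitting in OpenSSL's buffers, but it may not be
             * application data and may even fail to parse. */
        }
    }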
Reviewed-by: Tim Hudson <tjh@openssl.org>
This capability is required for read pipelining. We will only read in as
many records as will fit in the read buffer (and as the network can provide
in one go). The bigger the buffer, the more records we can process in
parallel.
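A sketch of enlarging the buffer from the application, assuming the control
is exposed as SSL_CTX_set_default_read_buffer_len(); the size shown is
purely illustrative:

    #include <openssl/ssl.h>

    static void enlarge_read_buffer(SSL_CTX *ctx)
    {
        /* Room for roughly four 16KB records plus per-record overhead. */
        SSL_CTX_set_default_read_buffer_len(ctx, 4 * 16384 + 4096);
    }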
Reviewed-by: Tim Hudson <tjh@openssl.org>
With read pipelining we use multiple SSL3_RECORD structures for reading.
There are SSL_MAX_PIPELINES (32) of them defined (typically not all of these
would be used). Each one has a 16k compression buffer allocated! This
results in a significant amount of memory being consumed which, most of the
time, is not needed. This change makes the allocation of the compression
buffer lazy, so that it is only done immediately before the buffer is
actually used.
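A generic sketch of the lazy-allocation pattern described here; the
structure and names are illustrative rather than the actual SSL3_RECORD
internals:

    #include <stdlib.h>

    #define COMP_BUF_SIZE (16 * 1024)

    struct rec {
        unsigned char *comp;   /* compression buffer, NULL until needed */
    };

    static unsigned char *rec_get_comp_buf(struct rec *r)
    {
        if (r->comp == NULL)
            r->comp = malloc(COMP_BUF_SIZE);  /* allocate on first use only */
        return r->comp;                       /* NULL on allocation failure */
    }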
Reviewed-by: Tim Hudson <tjh@openssl.org>
Read pipelining is controlled in a slightly different way than write
pipelining. While reading, we are constrained by the number of records that
the peer (and the network) can provide to us in one go. The more records
we can get in one go, the more opportunity we have to parallelise the
processing.
There are two parameters that affect this:
* The number of pipelines that we are willing to process in one go. This is
controlled by max_pipelines (as for write pipelining)
* The size of our read buffer. A subsequent commit will provide an API for
adjusting the size of the buffer.
Another requirement for this to work is that "read_ahead" must be set. With
read_ahead set, we attempt to read as much data into our read buffer as the
network can provide. Without it, data is read into the read buffer only on
demand. Setting the max_pipelines parameter to a value greater than 1 will
automatically also turn read_ahead on.
Finally, the read pipelining as currently implemented will only parallelise
the processing of application data records. This limitation would only make
a difference during renegotiation, so it is unlikely to have a significant
impact.
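A sketch of enabling read pipelining from the application side, assuming the
knobs are exposed as SSL_CTX_set_max_pipelines(), SSL_CTX_set_read_ahead()
and (from the subsequent commit mentioned above)
SSL_CTX_set_default_read_buffer_len(); a pipeline-capable cipher
implementation is still needed for any parallelism to occur:

    #include <openssl/ssl.h>

    static void enable_read_pipelining(SSL_CTX *ctx)
    {
        SSL_CTX_set_max_pipelines(ctx, 4);   /* > 1 also turns read_ahead on */
        SSL_CTX_set_read_ahead(ctx, 1);      /* explicit, for clarity */
        SSL_CTX_set_default_read_buffer_len(ctx, 4 * 16384 + 4096);
    }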
Reviewed-by: Tim Hudson <tjh@openssl.org>
Use the new pipeline cipher capability to encrypt multiple records being
written out all in one go. Two new SSL/SSL_CTX parameters can be used to
control how this works: max_pipelines and split_send_fragment.
max_pipelines defines the maximum number of pipelines that can ever be used
in one go for a single connection. It must always be less than or equal to
SSL_MAX_PIPELINES (currently defined to be 32). By default only one
pipeline will be used (i.e. normal non-parallel operation).
split_send_fragment defines how data is split up into pipelines. The number
of pipelines used will be determined by the amount of data provided to the
SSL_write call divided by split_send_fragment. For example if
split_send_fragment is set to 2000 and max_pipelines is 4 then:
SSL_write called with 0-2000 bytes == 1 pipeline used
SSL_write called with 2001-4000 bytes == 2 pipelines used
SSL_write called with 4001-6000 bytes == 3 pipelines used
SSL_write called with 6001+ bytes == 4 pipelines used
split_send_fragment must always be less than or equal to max_send_fragment.
By default it is set to be equal to max_send_fragment. This will mean that
the same number of records will always be created as would have been
created in the non-parallel case, although the data will be apportioned
differently. In the parallel case data will be spread equally between the
pipelines.
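A sketch of the new controls, assuming they are exposed as
SSL_CTX_set_max_pipelines() and SSL_CTX_set_split_send_fragment(); the
values mirror the worked example above, and a pipeline-capable cipher is
still required for records to actually be encrypted in parallel:

    #include <openssl/ssl.h>

    static void enable_write_pipelining(SSL_CTX *ctx)
    {
        SSL_CTX_set_max_pipelines(ctx, 4);          /* at most 4 parallel records */
        SSL_CTX_set_split_send_fragment(ctx, 2000); /* 2000 bytes per pipeline */
    }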
Reviewed-by: Tim Hudson <tjh@openssl.org>
Without this, the peer certificate would never be deleted, resulting in
a memory leak.
Reviewed-by: Emilia Käsper <emilia@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>