The numpipes argument to ssl3_enc/tls1_enc is actually the number of
records passed in the array. To make this clearer, rename the argument to
|n_recs|.
Reviewed-by: Tim Hudson <tjh@openssl.org>
We used to use the wrec field in the record layer for keeping track of the
current record that we are writing out. As part of the pipelining changes
this has been moved to stack allocated variables to do the same thing,
therefore the field is no longer needed.
Reviewed-by: Tim Hudson <tjh@openssl.org>
This is similar to SSL_pending(), but simply returns 1 if there is data
pending in the internal OpenSSL buffers and 0 otherwise (as opposed to
SSL_pending(), which returns the number of bytes available). Unlike
SSL_pending(), this will work even if "read_ahead" is set (which is the
case if you are using read pipelining, or if you are doing DTLS). A return
value of 1 means that we have unprocessed data. It does *not* necessarily
indicate that there will be application data returned from a call to
SSL_read(). The unprocessed data may not be application data or there
could be errors when we attempt to parse the records.
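A minimal usage sketch, assuming the new call is exposed as SSL_has_pending():

    #include <openssl/ssl.h>

    /* Drain everything OpenSSL has already buffered before returning to
     * the event loop. A return of 1 from SSL_has_pending() only means
     * unprocessed data is buffered; it may not be application data, so
     * SSL_read() can still legitimately return <= 0 here. */
    static void drain_pending(SSL *ssl)
    {
        char buf[4096];

        while (SSL_has_pending(ssl)) {
            int n = SSL_read(ssl, buf, sizeof(buf));

            if (n <= 0)
                break;  /* no application data (yet), or an error */
            /* process n bytes from buf ... */
        }
    }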
Reviewed-by: Tim Hudson <tjh@openssl.org>
This capability is required for read pipelining. We will only read in as
many records as will fit in the read buffer (and the network can provide
in one go). The bigger the buffer the more records we can process in
parallel.
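As a sketch of how an application might use this, assuming the knob is exposed
as SSL_CTX_set_default_read_buffer_len() (as in later releases):

    #include <openssl/ssl.h>

    /* Ask for a read buffer large enough to hold several records, so more
     * records can be pulled off the network and processed in one go. */
    void configure_read_buffer(SSL_CTX *ctx)
    {
        /* Roughly four maximum-sized TLS records plus per-record overhead. */
        SSL_CTX_set_default_read_buffer_len(ctx, 4 * 16384 + 2048);
    }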
Reviewed-by: Tim Hudson <tjh@openssl.org>
With read pipelining we use multiple SSL3_RECORD structures for reading.
There are SSL_MAX_PIPELINES (32) of them defined (typically not all of these
would be used). Each one has a 16k compression buffer allocated! This
results in a significant amount of memory being consumed which, most of the
time, is not needed. This change makes the allocation of the compression
buffer lazy, so that it is only done immediately before the buffer is
actually used.
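The pattern is roughly the following (a sketch only, using hypothetical names
rather than the actual record layer code):

    #include <openssl/crypto.h>

    #define COMP_BUF_SIZE (16 * 1024)

    struct rec_sketch {
        unsigned char *comp;    /* compression buffer, NULL until needed */
    };

    /* Allocate the 16k compression buffer only on first use, instead of
     * up front for every record structure. */
    static int ensure_comp_buffer(struct rec_sketch *rec)
    {
        if (rec->comp != NULL)
            return 1;                           /* already allocated */
        rec->comp = OPENSSL_malloc(COMP_BUF_SIZE);
        return rec->comp != NULL;
    }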
Reviewed-by: Tim Hudson <tjh@openssl.org>
Read pipelining is controlled in a slightly different way than with write
pipelining. While reading we are constrained by the number of records that
the peer (and the network) can provide to us in one go. The more records
we can get in one go the more opportunity we have to parallelise the
processing.
There are two parameters that affect this:
* The number of pipelines that we are willing to process in one go. This is
controlled by max_pipelines (as for write pipelining).
* The size of our read buffer. A subsequent commit will provide an API for
adjusting the size of the buffer.
Another requirement for this to work is that "read_ahead" must be set. The
read_ahead parameter will attempt to read as much data into our read buffer
as the network can provide. Without this set, data is read into the read
buffer on demand. Setting the max_pipelines parameter to a value greater
than 1 will automatically also turn read_ahead on.
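For example, a server wanting read pipelining might configure something like
this (a sketch; the explicit read_ahead call is redundant given the automatic
behaviour just described, but makes the requirement visible):

    #include <openssl/ssl.h>

    void enable_read_pipelining(SSL_CTX *ctx)
    {
        SSL_CTX_set_max_pipelines(ctx, 4);  /* process up to 4 records in one go */
        SSL_CTX_set_read_ahead(ctx, 1);     /* fill the read buffer eagerly */
    }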
Finally, read pipelining as currently implemented will only parallelise the
processing of application data records. This limitation would only make a
difference during renegotiation, so it is unlikely to have a significant
impact.
Reviewed-by: Tim Hudson <tjh@openssl.org>
Use the new pipeline cipher capability to encrypt multiple records being
written out all in one go. Two new SSL/SSL_CTX parameters can be used to
control how this works: max_pipelines and split_send_fragment.
max_pipelines defines the maximum number of pipelines that can ever be used
in one go for a single connection. It must always be less than or equal to
SSL_MAX_PIPELINES (currently defined to be 32). By default only one
pipeline will be used (i.e. normal non-parallel operation).
split_send_fragment defines how data is split up into pipelines. The number
of pipelines used will be determined by the amount of data provided to the
SSL_write call divided by split_send_fragment. For example if
split_send_fragment is set to 2000 and max_pipelines is 4 then:
SSL_write called with 0-2000 bytes == 1 pipeline used
SSL_write called with 2001-4000 bytes == 2 pipelines used
SSL_write called with 4001-6000 bytes == 3 pipelines used
SSL_write called with 6001+ bytes == 4 pipelines used
split_send_fragment must always be less than or equal to max_send_fragment.
By default it is set to be equal to max_send_fragment. This will mean that
the same number of records will always be created as would have been
created in the non-parallel case, although the data will be apportioned
differently. In the parallel case data will be spread equally between the
pipelines.
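As a sketch, the worked example above corresponds to roughly the following
configuration (pipelining only takes effect when the underlying cipher
implementation advertises the pipeline capability):

    #include <openssl/ssl.h>

    void enable_write_pipelining(SSL_CTX *ctx)
    {
        SSL_CTX_set_max_pipelines(ctx, 4);
        SSL_CTX_set_split_send_fragment(ctx, 2000);
        /* A 5000 byte SSL_write() would now be spread across 3 pipelines. */
    }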
Reviewed-by: Tim Hudson <tjh@openssl.org>
This was done by the following
    find . -name '*.[ch]' | /tmp/pl
where /tmp/pl is the following three-line script:
    print unless $. == 1 && m@/\* .*\.[ch] \*/@;
    close ARGV if eof; # Close file to reset $.
And then some hand-editing of other files.
Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
This is an internal facility, never documented, not for
public consumption. Move it into ssl (where it's only used
for DTLS).
I also made the typedefs for pqueue and pitem follow our style: they
name structures, not pointers.
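In other words, roughly this kind of change (a simplified sketch; the actual
struct tags may differ):

    /* Previously pqueue was typedef'd as a pointer:
     *     typedef struct pqueue_st *pqueue;
     * Under the house style the typedefs name the structures themselves,
     * and callers spell the pointer explicitly as pqueue * / pitem *. */
    typedef struct pqueue_st pqueue;
    typedef struct pitem_st pitem;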
Reviewed-by: Richard Levitte <levitte@openssl.org>
Also tweak some of the code in demos/bio, to enable interactive
testing of BIO_s_accept's use of SSL_dup. Changed the sconnect
client to authenticate the server, which now exercises the new
SSL_set1_host() function.
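A minimal sketch of what the updated client does when authenticating the
server (the hostname argument and helper function are illustrative):

    #include <openssl/ssl.h>

    /* Pin the expected hostname so that certificate verification also
     * checks the server's identity, then require verification to pass. */
    int setup_verification(SSL *ssl, const char *host)
    {
        if (!SSL_set1_host(ssl, host))
            return 0;
        SSL_set_verify(ssl, SSL_VERIFY_PEER, NULL);
        return 1;
    }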
Reviewed-by: Richard Levitte <levitte@openssl.org>
The existing implementation of DTLSv1_listen() is fundamentally flawed. This
function is used in DTLS solutions to listen for new incoming connections
from DTLS clients. A client will send an initial ClientHello. The server
will respond with a HelloVerifyRequest containing a unique cookie. The
client then responds with a second ClientHello, which this time contains the
cookie.
Once the cookie has been verified, DTLSv1_listen() returns to user code,
which is typically expected to continue the handshake with a call to (for
example) SSL_accept().
Whilst listening for incoming ClientHellos, the underlying BIO is usually in
an unconnected state. Therefore ClientHellos can come in from *any* peer.
The arrival of the first ClientHello without the cookie, and the second one
with it, could be interspersed with other intervening messages from
different clients.
The whole purpose of this mechanism is as a defence against DoS attacks. The
idea is to avoid allocating state on the server until the client has
verified that it is capable of receiving messages at the address it claims
to come from. However the existing DTLSv1_listen() implementation completely
fails to do this. It attempts to superimpose itself on the standard state
machine and reuses all of this code. However the standard state machine
expects to operate in a stateful manner with a single client, and this can
cause various problems.
A second more minor issue is that the return codes from this function are
quite confused, with no distinction made between fatal and non-fatal errors.
Most user code treats all errors as non-fatal, and simply retries the call
to DTLSv1_listen().
This commit completely rewrites the implementation of DTLSv1_listen() and
provides a standalone implementation that does not rely on the existing
state machine. It also provides more consistent return codes.
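The intended usage pattern is roughly the following sketch (assuming the
BIO_ADDR based signature; socket and BIO setup are omitted):

    #include <openssl/ssl.h>
    #include <openssl/bio.h>

    /* Wait for a cookie-verified ClientHello, then continue the handshake. */
    int wait_for_client(SSL *ssl)
    {
        BIO_ADDR *peer = BIO_ADDR_new();
        int ret;

        do {
            ret = DTLSv1_listen(ssl, peer);
            if (ret < 0) {              /* fatal error */
                BIO_ADDR_free(peer);
                return -1;
            }
        } while (ret == 0);             /* non-fatal: keep listening */

        BIO_ADDR_free(peer);
        return SSL_accept(ssl);
    }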
Reviewed-by: Andy Polyakov <appro@openssl.org>
The handling of incoming CCS records is a little strange. Since CCS is not
a handshake message it is handled differently to normal handshake messages.
Unfortunately, whilst technically it is not a handshake message, the reality
is that it must be processed in accordance with the state of the handshake.
Currently CCS records are processed entirely within the record layer. In
order to ensure that it is handled in accordance with the handshake state
a flag is used to indicate that it is an acceptable time to receive a CCS.
Previously this flag did not exist (see CVE-2014-0224), but the flag should
only really be considered a workaround for the problem that CCS is not
visible to the state machine.
Outgoing CCS messages are already handled within the state machine.
This patch makes CCS visible to the TLS state machine. A separate commit
will handle DTLS.
Reviewed-by: Tim Hudson <tjh@openssl.org>
The underlying field returned by RECORD_LAYER_get_rrec_length() is an
unsigned int. The return type of the function should match that.
Reviewed-by: Tim Hudson <tjh@openssl.org>
Following the version negotiation rewrite all of the previous code that was
dedicated to version negotiation can now be deleted - all six source files
of it!!
Reviewed-by: Kurt Roeckx <kurt@openssl.org>
This commit changes the way that we do server side protocol version
negotiation. Previously we had a whole set of code that had an "up front"
state machine dedicated to negotiating the protocol version. This adds
significant complexity to the state machine. Historically the justification
for doing this was the support of SSLv2 which works quite differently to
SSLv3+. However, we have now removed support for SSLv2 so there is little
reason to maintain this complexity.
The one slight difficulty is that, although we no longer support SSLv2, we
do still support an SSLv3+ ClientHello in an SSLv2 backward compatible
ClientHello format. This is generally only used by legacy clients. This
commit adds support within the SSLv3 code for these legacy format
ClientHellos.
Server side version negotiation now works in much the same way as DTLS,
i.e. we introduce the concept of TLS_ANY_VERSION. If s->version is set to
that then when a ClientHello is received it will work out the most
appropriate version to respond with. Also, SSLv23_method and
SSLv23_server_method have been replaced with TLS_method and
TLS_server_method respectively. The old SSLv23* names still exist as
macros pointing at the new name, although they are deprecated.
Subsequent commits will look at client side version negotiation, as well as
removal of the old s23* code.
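From an application's point of view the change looks something like this
(a sketch):

    #include <openssl/ssl.h>

    /* Create a server context that negotiates the highest mutually
     * supported protocol version. SSLv23_server_method() still compiles,
     * but is now a deprecated alias for TLS_server_method(). */
    SSL_CTX *make_server_ctx(void)
    {
        return SSL_CTX_new(TLS_server_method());
    }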
Reviewed-by: Kurt Roeckx <kurt@openssl.org>
There were a set of includes in dtls1.h which are now redundant due to the
libssl opaque work. This commit removes those includes, which also has the
effect of resolving one issue preventing building on Windows (i.e. the
include of winsock.h)
Reviewed-by: Andy Polyakov <appro@openssl.org>
Fix some strange formatting in record.h. This was probably originally
introduced as part of the reformat work.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Replace the hard-coded value 8 (the size of the sequence number) with a
constant defined in a macro.
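The change amounts to something like the following (the macro name here is an
assumption for illustration):

    /* The size of a record sequence number, as a named constant. */
    #define SEQ_NUM_SIZE 8

    /* so that, for example, memcpy(dst, seq, 8) becomes: */
    /*     memcpy(dst, seq, SEQ_NUM_SIZE); */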
Reviewed-by: Richard Levitte <levitte@openssl.org>