Fix some comments too
[skip ci]
Reviewed-by: Tim Hudson <tjh@openssl.org>
Reviewed-by: Richard Levitte <levitte@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/3069)
Found using various (old-ish) versions of gcc.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Richard Levitte <levitte@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/2940)
The value of SSL3_RT_MAX_ENCRYPTED_LENGTH normally includes the compression
overhead (even if no compression is negotiated for a connection), except in
a no-comp build, where SSL3_RT_MAX_ENCRYPTED_LENGTH does not include the
compression overhead.
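As a rough sketch of the relationship (the macro names follow ssl3.h, but the
exact values here are assumed for illustration):

    #define SSL3_RT_MAX_PLAIN_LENGTH          16384
    #define SSL3_RT_MAX_COMPRESSED_OVERHEAD    1024

    #ifndef OPENSSL_NO_COMP
    # define SSL3_RT_MAX_COMPRESSED_LENGTH \
        (SSL3_RT_MAX_PLAIN_LENGTH + SSL3_RT_MAX_COMPRESSED_OVERHEAD)
    #else
    /* no-comp build: the compression overhead is not included */
    # define SSL3_RT_MAX_COMPRESSED_LENGTH SSL3_RT_MAX_PLAIN_LENGTH
    #endif
    /* SSL3_RT_MAX_ENCRYPTED_LENGTH is then derived from the value above,
     * so it too loses the compression allowance in a no-comp build. */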
Reviewed-by: Richard Levitte <levitte@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/2872)
We also skip any early_data that subsequently gets sent. Later commits will
process it if we can.
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/2737)
This removes the fips configure option. This option is broken as the
required FIPS code is not available.
FIPS_mode() and FIPS_mode_set() are retained for compatibility, but
FIPS_mode() always returns 0, and FIPS_mode_set() can only be used to
turn FIPS mode off.
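A minimal sketch of the described compatibility behaviour (not the actual
implementation) might look like:

    /* Sketch only: FIPS mode can never be enabled in this build. */
    int FIPS_mode(void)
    {
        return 0;
    }

    int FIPS_mode_set(int onoff)
    {
        if (onoff != 0)
            return 0;   /* turning FIPS mode on is not supported */
        return 1;       /* turning it off (a no-op) succeeds */
    }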
Reviewed-by: Stephen Henson <steve@openssl.org>
Following on from CVE-2017-3733, this removes the OPENSSL_assert() check
that failed and replaces it with a soft assert, and an explicit check of
value with an error return if it fails.
Reviewed-by: Richard Levitte <levitte@openssl.org>
In 1.1.0 changing the ciphersuite during a renegotiation can result in
a crash leading to a DoS attack. In master this does not occur with TLS
(instead you get an internal error, which is still wrong but not a security
issue) - but the problem still exists in the DTLS code.
The problem is caused by changing the flag indicating whether to use ETM
or not immediately on negotiation of ETM, rather than at CCS. Therefore,
during a renegotiation, if the ETM state is changing (usually due to a
change of ciphersuite), then an error/crash will occur.
Due to the fact that there are separate CCS messages for read and write
we actually now need two flags to determine whether to use ETM or not.
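A sketch of the idea, with hypothetical flag and function names:

    /* Sketch only -- the names here are hypothetical. */
    #define USE_ETM_READ   0x1   /* EtM active for incoming records */
    #define USE_ETM_WRITE  0x2   /* EtM active for outgoing records */

    /* On negotiating the extension, only remember that EtM was agreed. */
    static void etm_negotiated(unsigned int *pending)
    {
        *pending |= USE_ETM_READ | USE_ETM_WRITE;
    }

    /* Each direction switches over only when its own CCS is processed. */
    static void ccs_received(unsigned int *flags, unsigned int pending)
    {
        *flags |= pending & USE_ETM_READ;
    }

    static void ccs_sent(unsigned int *flags, unsigned int pending)
    {
        *flags |= pending & USE_ETM_WRITE;
    }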
CVE-2017-3733
Reviewed-by: Richard Levitte <levitte@openssl.org>
This comes from a comment in GH issue #1027. Andy wrote the code,
Rich made the PR.
Reviewed-by: Andy Polyakov <appro@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/2253)
At the moment the msg callback only receives the record header with the
outer record type in it. We never pass the inner record type - we probably
need to at some point.
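For reference, a minimal sketch of a message callback installed with
SSL_CTX_set_msg_callback(); with content_type SSL3_RT_HEADER it only ever
sees the outer record header described above:

    #include <stdio.h>
    #include <openssl/ssl.h>

    /* Sketch: the SSL3_RT_HEADER callback only sees the 5-byte record
     * header, whose first byte is the *outer* record type. */
    static void msg_cb(int write_p, int version, int content_type,
                       const void *buf, size_t len, SSL *ssl, void *arg)
    {
        if (content_type == SSL3_RT_HEADER && len > 0) {
            const unsigned char *hdr = buf;
            fprintf(stderr, "%s record, outer type %d\n",
                    write_p ? "wrote" : "read", hdr[0]);
        }
    }

    /* Usage: SSL_CTX_set_msg_callback(ctx, msg_cb); */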
Reviewed-by: Rich Salz <rsalz@openssl.org>
OpenSSL 1.1.0 will negotiate EtM on DTLS but will then not actually *do* it.
If we use DTLSv1.2 that will hopefully be harmless since we'll tend to use
an AEAD ciphersuite anyway. But if we're using DTLSv1, then we certainly
will end up using CBC, so EtM is relevant — and we fail to interoperate with
anything that implements EtM correctly.
Fixing it in HEAD and 1.1.0c will mean that 1.1.0[ab] are incompatible with
1.1.0c+... for the limited case of non-AEAD ciphers, where they're *already*
incompatible with other implementations due to this bug anyway. That seems
reasonable enough, so let's do it. The only alternative is just to turn it
off for ever... which *still* leaves 1.1.0[ab] failing to communicate with
non-OpenSSL implementations anyway.
Tested against itself as well as against GnuTLS both with and without EtM.
Reviewed-by: Tim Hudson <tjh@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
The DTLS implementation provides some protection against replay attacks
in accordance with RFC6347 section 4.1.2.6.
A sliding "window" of valid record sequence numbers is maintained with
the "right" hand edge of the window set to the highest sequence number we
have received so far. Records that arrive that are off the "left" hand
edge of the window are rejected. Records within the window are checked
against a list of records received so far. If we already received it then
we also reject the new record.
If we have not already received the record, or the sequence number is off
the right hand edge of the window then we verify the MAC of the record.
If MAC verification fails then we discard the record. Otherwise we mark
the record as received. If the sequence number was off the right hand edge
of the window, then we slide the window along so that the right hand edge
is in line with the newly received sequence number.
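For illustration, a self-contained sketch of this kind of sliding-window
check (not the OpenSSL implementation):

    #include <stdint.h>

    /* Sketch of the sliding-window replay check described above. */
    typedef struct {
        uint64_t right;    /* highest sequence number received so far */
        uint64_t bitmap;   /* bit i set => (right - i) already seen   */
    } replay_window;

    /* Return 1 if seq may be accepted (the MAC must still be verified
     * before updating the window), 0 if it is a replay or falls off the
     * left hand edge of the window. */
    static int window_check(const replay_window *w, uint64_t seq)
    {
        if (seq > w->right)
            return 1;                        /* beyond the right edge */
        if (w->right - seq >= 64)
            return 0;                        /* off the left edge     */
        return !((w->bitmap >> (w->right - seq)) & 1);
    }

    /* Mark seq as received, sliding the window if it is a new maximum.
     * Call only after window_check() passed and the MAC was verified. */
    static void window_update(replay_window *w, uint64_t seq)
    {
        if (seq > w->right) {
            uint64_t shift = seq - w->right;
            w->bitmap = shift < 64 ? (w->bitmap << shift) | 1 : 1;
            w->right = seq;
        } else {
            w->bitmap |= (uint64_t)1 << (w->right - seq);
        }
    }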
Records may arrive for future epochs, i.e. a record sent after a CCS can
arrive before the CCS does if the packets get re-ordered. As we
have not yet received the CCS we are not yet in a position to decrypt or
validate the MAC of those records. OpenSSL places those records on an
unprocessed records queue. It additionally updates the window immediately,
even though we have not yet verified the MAC. This will only occur if we
are currently in a handshake/renegotiation.
This could be exploited by an attacker by sending a record for the next
epoch (which does not have to decrypt or have a valid MAC), with a very
large sequence number. This means the right hand edge of the window is
moved very far to the right, and all subsequent legitimate packets are
dropped causing a denial of service.
A similar effect can be achieved during the initial handshake. In this
case there is no MAC key negotiated yet. Therefore an attacker can send a
message for the current epoch with a very large sequence number. The code
will process the record as normal. If the handshake message sequence number
(as opposed to the record sequence number that we have been talking about
so far) is in the future then the injected message is buffered to be
handled later, but the window is still updated. Therefore all subsequent
legitimate handshake records are dropped. This aspect is not considered a
security issue because there are many ways for an attacker to disrupt the
initial handshake and prevent it from completing successfully (e.g.
injection of a handshake message will cause the Finished MAC to fail and
the handshake to be aborted). This issue comes about as a result of trying
to do replay protection, but having no integrity mechanism in place yet.
Does it even make sense to have replay protection in epoch 0? That
issue isn't addressed here though.
This addresses an OCAP Audit issue.
CVE-2016-2181
Reviewed-by: Richard Levitte <levitte@openssl.org>
During a DTLS handshake we may get records destined for the next epoch
arrive before we have processed the CCS. In that case we can't decrypt or
verify the record yet, so we buffer it for later use. When we do receive
the CCS we work through the queue of unprocessed records and process them.
Unfortunately the act of processing wipes out any existing packet data
that we were still working through. This includes any records from the new
epoch that were in the same packet as the CCS. We should only process the
buffered records if we've not got any data left.
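A sketch of the ordering this change enforces (the struct and helper names
below are hypothetical):

    /* Sketch only -- the type and helpers here are hypothetical. */
    struct conn;
    int packet_data_left(struct conn *c);         /* bytes still unread?  */
    int process_current_packet(struct conn *c);   /* handle those first   */
    int process_buffered_records(struct conn *c); /* then drain the queue */

    int after_ccs(struct conn *c)
    {
        /*
         * Records for the new epoch may sit in the same packet as the CCS.
         * Draining the buffered queue first would wipe out that remaining
         * packet data, so only do it once the packet is exhausted.
         */
        if (packet_data_left(c))
            return process_current_packet(c);
        return process_buffered_records(c);
    }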
Reviewed-by: Richard Levitte <levitte@openssl.org>
Run util/openssl-format-source on ssl/
Some comments and hand-formatted tables were fixed up
manually by disabling auto-formatting.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Feedback on the previous SSLv2 ClientHello processing fix was that it
breaks layering by reading init_num in the record layer. It also does not
detect if there was a previous non-fatal warning.
This is an alternative approach that directly tracks in the record layer
whether this is the first record.
GitHub Issue #1298
Reviewed-by: Tim Hudson <tjh@openssl.org>
Thanks to Peter Gijsels for pointing out that if a CBC record has 255
bytes of padding, the first byte was not being checked.
(This is an import of change 80842bdb from BoringSSL.)
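A simplified sketch of the class of check involved (real implementations do
this in constant time; this is not the OpenSSL/BoringSSL code):

    #include <stddef.h>

    /*
     * rec[len - 1] is the padding length; the 'pad' bytes before it must
     * all equal that value -- including the first one, which is the byte
     * that was escaping the check when pad == 255.
     */
    static int padding_ok(const unsigned char *rec, size_t len)
    {
        unsigned char pad = rec[len - 1];
        size_t i;

        if ((size_t)pad + 1 > len)
            return 0;
        for (i = 1; i <= pad; i++) {
            if (rec[len - 1 - i] != pad)
                return 0;
        }
        return 1;
    }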
Reviewed-by: Emilia Käsper <emilia@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1431)
SSLv2 is no longer supported in 1.1.0, however we *do* still accept an SSLv2
style ClientHello, as long as we then subsequently negotiate a protocol
version >= SSLv3. The record format for SSLv2 style ClientHellos is quite
different to SSLv3+. We only accept this format in the first record of an
initial ClientHello. Previously we checked this by confirming
s->first_packet is set and s->server is true. However, this really only
tells us that we are dealing with an initial ClientHello, not that it is
the first record (s->first_packet is badly named...it really means this is
the first message). To check this is the first record of the initial
ClientHello we should also check that we've not received any data yet
(s->init_num == 0), and that we've not had any empty records.
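A sketch of the combined condition (field names follow this message; the
stand-in struct is hypothetical and the real code in ssl3_get_record()
differs):

    /* Sketch only: a stand-in for the fields named above. */
    typedef struct {
        int first_packet;         /* initial ClientHello in progress     */
        int server;               /* we are acting as the server         */
        int init_num;             /* bytes of handshake data read so far */
        int empty_records;        /* empty records seen so far (assumed) */
    } ssl_state;

    static int sslv2_format_allowed(const ssl_state *s)
    {
        /* Accept the SSLv2 record format only in the very first record
         * of an initial ClientHello. */
        return s->first_packet && s->server
               && s->init_num == 0 && s->empty_records == 0;
    }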
GitHub Issue #1298
Reviewed-by: Emilia Käsper <emilia@openssl.org>
Fix some indentation at the same time
Reviewed-by: Matt Caswell <matt@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1292)
Previously if we received an empty record we just threw it away and
ignored it. Really though if we get an empty record of a different content
type to what we are expecting then that should be an error, i.e. we should
reject out of context empty records. This commit makes the necessary changes
to achieve that.
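A sketch of the check this introduces (not the actual ssl3_get_record()
code):

    /* Sketch only. */
    static int empty_record_ok(int rec_type, int expected_type)
    {
        /*
         * An empty record is only tolerated (and silently dropped) when
         * its content type matches what we are expecting; an empty record
         * of any other type is out of context and is treated as an error.
         */
        return rec_type == expected_type;
    }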
RT#4395
Reviewed-by: Andy Polyakov <appro@openssl.org>
In the SSLV2ClientHello processing code in ssl3_get_record, the value of
|num_recs| will always be 0. This isn't obvious from the code so a comment
is added to explain it.
Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
The function ssl3_get_record() can obtain multiple records in one go
as long as we are set up for pipelining and all the records are app
data records. The logic in the while loop which reads in each record is
supposed to only continue looping if the last record we read was app data
and we have an app data record waiting in the buffer to be processed. It
was actually checking that the first record was app data and that we have an
app data record waiting. This amounts to the same thing so it wasn't
wrong - but it looks a bit odd because it uses the |rr| array without an
offset.
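A sketch of the loop shape in question, with hypothetical names; the point is
that the continuation test should look at the record just read, i.e.
rr[num_recs - 1], not rr[0]:

    /* Sketch only -- names and helpers here are hypothetical. */
    #define MAX_PIPELINES 32
    #define REC_APP_DATA  23               /* application_data type */

    struct rec { int type; };

    int read_one_record(struct rec *r);    /* 0 on error            */
    int app_data_pending(void);            /* more app data queued? */

    static int read_records(struct rec rr[MAX_PIPELINES])
    {
        int num_recs = 0;

        do {
            if (!read_one_record(&rr[num_recs]))
                return 0;
            num_recs++;
            /* Continue only if the record just read (not rr[0]) is app
             * data and another app data record is already buffered. */
        } while (num_recs < MAX_PIPELINES
                 && rr[num_recs - 1].type == REC_APP_DATA
                 && app_data_pending());

        return num_recs;
    }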
Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
Pipelining introduced the concept of multiple records being read in one
go. Therefore we work with an array of SSL3_RECORD objects. The pipelining
change erroneously made a change in ssl3_get_record() to apply the current
record offset to the SSL3_BUFFER we are using for reading. This is wrong -
there is only ever one read buffer. This reverts that change. In practice
this should make little difference because the code block in question is
only ever used when we are processing a single record.
Reviewed-by: Viktor Dukhovni <viktor@openssl.org>