All the other functions that take an argument for the number of bytes
use convenience macros for this purpose. We should do the same with
WPACKET_put_bytes().
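A sketch of what those convenience macros could look like, assuming the
existing WPACKET_put_bytes(pkt, val, bytes) signature (the names follow the
_u8/_u16 convention used elsewhere in the WPACKET API, but the committed
definitions may differ):

    /* Hypothetical sketch only - each macro fixes the byte count. */
    #define WPACKET_put_bytes_u8(pkt, val)  WPACKET_put_bytes((pkt), (val), 1)
    #define WPACKET_put_bytes_u16(pkt, val) WPACKET_put_bytes((pkt), (val), 2)
    #define WPACKET_put_bytes_u24(pkt, val) WPACKET_put_bytes((pkt), (val), 3)
    #define WPACKET_put_bytes_u32(pkt, val) WPACKET_put_bytes((pkt), (val), 4)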
Reviewed-by: Rich Salz <rsalz@openssl.org>
Makes the logic a little bit clearer.
Reviewed-by: Andy Polyakov <appro@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1571)
This reverts commit 77a6be4dfc.
There were some unexpected side effects from this commit, e.g. in SSLv3 a
"no_certificate" warning alert gets sent if a client does not send a
Certificate during Client Auth. With the above commit this causes the
connection to abort, which is incorrect. There may be some other edge cases
like this, so we need to have a rethink on this.
Reviewed-by: Tim Hudson <tjh@openssl.org>
Updated the construction code to use the new function. Also added some
convenience macros for WPACKET_sub_memcpy().
Reviewed-by: Rich Salz <rsalz@openssl.org>
A peer continually sending unrecognised warning alerts could mean that we
make no progress on a connection. We should abort rather than continuing if
we receive an unrecognised warning alert.
Thanks to Shi Lei for reporting this issue.
Reviewed-by: Rich Salz <rsalz@openssl.org>
This is an internal API. Some of the tests were for programmer error and
"should not happen" situations, so a soft assert is reasonable.
Reviewed-by: Rich Salz <rsalz@openssl.org>
A few style tweaks here and there. The main change is that curr and
packet_len are now offsets into the buffer to account for the fact that
the pointers can change if the buffer grows. Also dropped support for the
WPACKET_set_packet_len() function. I thought that was going to be needed
but so far it hasn't been. It doesn't really work any more due to the
offsets change.
Reviewed-by: Rich Salz <rsalz@openssl.org>
The PACKET documentation is already in packet_locl.h so it makes sense to
have the WPACKET documentation there as well.
Reviewed-by: Rich Salz <rsalz@openssl.org>
The function tls_construct_cert_status() is called by both TLS and DTLS
code. However, it only ever constructed a TLS message header for the
message, which obviously failed in DTLS.
Reviewed-by: Rich Salz <rsalz@openssl.org>
It is never valid to call ssl3_read_bytes with
type == SSL3_RT_CHANGE_CIPHER_SPEC, and in fact we check for valid values
for type near the beginning of the function. Therefore this check will never
be true and can be removed.
Reviewed-by: Tim Hudson <tjh@openssl.org>
If a ticket callback changes the HMAC digest to SHA512 the existing
sanity checks are not sufficient and an attacker could perform a DoS
attack with a malformed ticket. Add additional checks based on
HMAC size.
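A minimal sketch of the kind of size check involved (illustrative only; the
helper and parameter names are assumptions, not the committed code):

    #include <openssl/hmac.h>

    /*
     * Before attempting HMAC verification of a session ticket, require it to
     * be long enough to hold the key name, the IV and a full MAC for
     * whatever digest the callback selected (64 bytes for SHA512).
     */
    static int ticket_len_plausible(size_t eticklen, const HMAC_CTX *hctx,
                                    size_t keyname_len, size_t iv_len)
    {
        size_t mac_len = HMAC_size(hctx);

        if (mac_len == 0)
            return 0;
        return eticklen >= keyname_len + iv_len + mac_len;
    }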
Thanks to Shi Lei for reporting this bug.
CVE-2016-6302
Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
Follow on from CVE-2016-2179
The investigation and analysis of CVE-2016-2179 highlighted a related flaw.
This commit fixes a security "near miss" in the buffered message handling
code. Ultimately this is not currently believed to be exploitable due to
the reasons outlined below, and therefore there is no CVE for this on its
own.
The issue this commit fixes is a MITM attack where the attacker can inject
a Finished message into the handshake. In the description below it is
assumed that the attacker injects the Finished message for the server to
receive it. The attack could work equally well the other way around (i.e.
where the client receives the injected Finished message).
The MITM requires the following capabilities:
- The ability to manipulate the MTU that the client selects such that it
is small enough for the client to fragment Finished messages.
- The ability to selectively drop and modify records sent from the client
- The ability to inject its own records and send them to the server
The MITM forces the client to select a small MTU such that the client
will fragment the Finished message. Ideally for the attacker the first
fragment will contain all but the last byte of the Finished message,
with the second fragment containing the final byte.
During the handshake and prior to the client sending the CCS the MITM
injects a plaintext Finished message fragment to the server containing
all but the final byte of the Finished message. The message sequence
number should be the one expected to be used for the real Finished message.
OpenSSL will recognise that the received fragment is for the future and
will buffer it for later use.
After the client sends the CCS it then sends its own Finished message in
two fragments. The MITM causes the first of these fragments to be
dropped. The OpenSSL server will then receive the second of the fragments
and reassemble the complete Finished message consisting of the MITM
fragment and the final byte from the real client.
The advantage to the attacker in injecting a Finished message is that
this provides the capability to modify other handshake messages (e.g.
the ClientHello) undetected. A difficulty for the attacker is knowing in
advance what impact any of those changes might have on the final byte of
the handshake hash that is going to be sent in the "real" Finished
message. In the worst case for the attacker this means that only 1 in
256 of such injection attempts will succeed.
It may be possible in some situations for the attacker to improve this such
that all attempts succeed. For example if the handshake includes client
authentication then the final message flight sent by the client will
include a Certificate. Certificates are ASN.1 objects where the signed
portion is DER encoded. The non-signed portion could be BER encoded and so
the attacker could re-encode the certificate such that the hash for the
whole handshake comes to a different value. The certificate re-encoding
would not be detectable because only the non-signed portion is changed. As
this is the final flight of messages sent from the client the attacker
knows what the complete handshake hash value will be that the client will
send - and therefore knows what the final byte will be. Through a process
of trial and error the attacker can re-encode the certificate until the
modified handshake also has a hash with the same final byte. This means
that when the Finished message is verified by the server it will be
correct in all cases.
In practice the MITM would need to be able to perform the same attack
against both the client and the server. If the attack is only performed
against the server (say) then the server will not detect the modified
handshake, but the client will and will abort the connection.
Fortunately, although OpenSSL is vulnerable to Finished message
injection, it is not vulnerable if *both* client and server are OpenSSL.
The reason is that OpenSSL has a hard "floor" for a minimum MTU size
that it will never go below. This minimum means that a Finished message
will never be sent in a fragmented form and therefore the MITM does not
have one of its pre-requisites. Therefore this could only be exploited
if using OpenSSL and some other DTLS peer that had its own and separate
Finished message injection flaw.
The fix is to ensure buffered messages are cleared on epoch change.
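Conceptually the fix amounts to flushing the buffered-fragment queue when
the epoch changes. A simplified model of that idea (the real code uses
OpenSSL's pqueue of hm_fragment structures, not this hypothetical list):

    #include <stdlib.h>

    struct buffered_frag {
        unsigned char *data;
        size_t len;
        struct buffered_frag *next;
    };

    /*
     * On an epoch change (i.e. when a CCS is processed), drop every fragment
     * buffered under the old epoch so that an injected pre-CCS fragment can
     * never be combined with post-CCS data from the peer.
     */
    static void flush_buffered_fragments(struct buffered_frag **queue)
    {
        while (*queue != NULL) {
            struct buffered_frag *frag = *queue;

            *queue = frag->next;
            free(frag->data);
            free(frag);
        }
    }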
Reviewed-by: Richard Levitte <levitte@openssl.org>
DTLS can handle out of order record delivery. Additionally since
handshake messages can be bigger than will fit into a single packet, the
messages can be fragmented across multiple records (as with normal TLS).
That means that the messages can arrive mixed up, and we have to
reassemble them. We keep a queue of buffered messages that are "from the
future", i.e. messages we're not ready to deal with yet but have arrived
early. The messages held there may not be full yet - they could be one
or more fragments that are still in the process of being reassembled.
The code assumes that we will eventually complete the reassembly and
when that occurs the complete message is removed from the queue at the
point that we need to use it.
However, DTLS is also tolerant of packet loss. To get around that DTLS
messages can be retransmitted. If we receive a full (non-fragmented)
message from the peer after previously having received a fragment of
that message, then we ignore the message in the queue and just use the
non-fragmented version. At that point the queued message will never get
removed.
Additionally the peer could send "future" messages that we never get to
in order to complete the handshake. Each message has a sequence number
(starting from 0). We will accept a message fragment for the current
message sequence number, or for any sequence up to 10 into the future.
However if the Finished message has a sequence number of 2, anything
greater than that in the queue is just left there.
So, in those two ways we can end up with "orphaned" data in the queue
that will never get removed - except when the connection is closed. At
that point all the queues are flushed.
An attacker could seek to exploit this by filling up the queues with
lots of large messages that are never going to be used in order to
attempt a DoS by memory exhaustion.
I will assume that we are only concerned with servers here. It does not
seem reasonable to be concerned about a memory exhaustion attack on a
client, which is unlikely to process enough connections for this to be
an issue.
A "long" handshake with many messages might be 5 messages long (in the
incoming direction), e.g. ClientHello, Certificate, ClientKeyExchange,
CertificateVerify, Finished. So this would be message sequence numbers 0
to 4. Additionally we can buffer up to 10 messages in the future.
Therefore the maximum number of messages that an attacker could send
that could get orphaned would typically be 15.
The maximum size that a DTLS message is allowed to be is defined by
max_cert_list, which by default is 100k. Therefore the maximum amount of
"orphaned" memory per connection is 1500k.
Message sequence numbers get reset after the Finished message, so
renegotiation will not extend the maximum number of messages that can be
orphaned per connection.
As noted above, the queues do get cleared when the connection is closed.
Therefore in order to mount an effective attack, an attacker would have
to open many simultaneous connections.
Issue reported by Quan Luo.
CVE-2016-2179
Reviewed-by: Richard Levitte <levitte@openssl.org>
The DTLS implementation provides some protection against replay attacks
in accordance with RFC6347 section 4.1.2.6.
A sliding "window" of valid record sequence numbers is maintained with
the "right" hand edge of the window set to the highest sequence number we
have received so far. Records that arrive that are off the "left" hand
edge of the window are rejected. Records within the window are checked
against a list of records received so far. If we have already received it
then we also reject the new record.
If we have not already received the record, or the sequence number is off
the right hand edge of the window then we verify the MAC of the record.
If MAC verification fails then we discard the record. Otherwise we mark
the record as received. If the sequence number was off the right hand edge
of the window, then we slide the window along so that the right hand edge
is in line with the newly received sequence number.
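The window mechanics described above can be illustrated with a simplified
64-entry bitmap. This is only a conceptual sketch, not the OpenSSL bitmap
code, and it deliberately omits the MAC check that must succeed before the
real code updates any state:

    #include <stdint.h>

    struct replay_window {
        uint64_t right_edge;  /* highest sequence number seen so far */
        uint64_t bitmap;      /* bit i set => (right_edge - i) was received */
    };

    /* Returns 1 if seq is new (and records it), 0 if it must be rejected. */
    static int replay_check_and_update(struct replay_window *w, uint64_t seq)
    {
        if (seq > w->right_edge) {            /* off the right edge: slide */
            uint64_t shift = seq - w->right_edge;

            w->bitmap = (shift >= 64) ? 0 : w->bitmap << shift;
            w->bitmap |= 1;                   /* mark seq itself */
            w->right_edge = seq;
            return 1;
        }
        if (w->right_edge - seq >= 64)        /* off the left edge */
            return 0;
        if (w->bitmap & ((uint64_t)1 << (w->right_edge - seq)))
            return 0;                         /* already received */
        w->bitmap |= (uint64_t)1 << (w->right_edge - seq);
        return 1;
    }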
Records may arrive for future epochs, i.e. a record sent after a CCS can
arrive before the CCS does if the packets get re-ordered. As we
have not yet received the CCS we are not yet in a position to decrypt or
validate the MAC of those records. OpenSSL places those records on an
unprocessed records queue. It additionally updates the window immediately,
even though we have not yet verified the MAC. This will only occur if we
are currently in a handshake/renegotiation.
This could be exploited by an attacker by sending a record for the next
epoch (which does not have to decrypt or have a valid MAC), with a very
large sequence number. This means the right hand edge of the window is
moved very far to the right, and all subsequent legitimate packets are
dropped causing a denial of service.
A similar effect can be achieved during the initial handshake. In this
case there is no MAC key negotiated yet. Therefore an attacker can send a
message for the current epoch with a very large sequence number. The code
will process the record as normal. If the handshake message sequence number
(as opposed to the record sequence number that we have been talking about
so far) is in the future then the injected message is buffered to be
handled later, but the window is still updated. Therefore all subsequent
legitimate handshake records are dropped. This aspect is not considered a
security issue because there are many ways for an attacker to disrupt the
initial handshake and prevent it from completing successfully (e.g.
injection of a handshake message will cause the Finished MAC to fail and
the handshake to be aborted). This issue comes about as a result of trying
to do replay protection, but having no integrity mechanism in place yet.
Does it even make sense to have replay protection in epoch 0? That
issue isn't addressed here though.
This addressed an OCAP Audit issue.
CVE-2016-2181
Reviewed-by: Richard Levitte <levitte@openssl.org>
During a DTLS handshake we may get records destined for the next epoch
arrive before we have processed the CCS. In that case we can't decrypt or
verify the record yet, so we buffer it for later use. When we do receive
the CCS we work through the queue of unprocessed records and process them.
Unfortunately the act of processing wipes out any existing packet data
that we were still working through. This includes any records from the new
epoch that were in the same packet as the CCS. We should only process the
buffered records if we've not got any data left.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Run util/openssl-format-source on ssl/
Some comments and hand-formatted tables were fixed up
manually by disabling auto-formatting.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Feedback on the previous SSLv2 ClientHello processing fix was that it
breaks layering by reading init_num in the record layer. It also does not
detect if there was a previous non-fatal warning.
This is an alternative approach that directly tracks in the record layer
whether this is the first record.
GitHub Issue #1298
Reviewed-by: Tim Hudson <tjh@openssl.org>
When handling ECDH, check to see if the curve is "custom" (X25519 is
currently the only curve of this type) and, instead of setting a curve
NID, just allocate a key of the appropriate type.
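A minimal standalone sketch of allocating such a key by type rather than by
curve NID (this is not the handshake code itself, and it assumes the
EVP_PKEY_X25519 key type identifier):

    #include <openssl/evp.h>

    static EVP_PKEY *gen_custom_curve_key(void)
    {
        EVP_PKEY *pkey = NULL;
        EVP_PKEY_CTX *pctx = EVP_PKEY_CTX_new_id(EVP_PKEY_X25519, NULL);

        if (pctx == NULL)
            return NULL;
        /* No EC_KEY or curve NID involved: the key type says it all. */
        if (EVP_PKEY_keygen_init(pctx) <= 0
                || EVP_PKEY_keygen(pctx, &pkey) <= 0)
            pkey = NULL;
        EVP_PKEY_CTX_free(pctx);
        return pkey;
    }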
Reviewed-by: Rich Salz <rsalz@openssl.org>
Thanks to Peter Gijsels for pointing out that if a CBC record has 255
bytes of padding, the first byte of padding was not being checked.
(This is an import of change 80842bdb from BoringSSL.)
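To illustrate the boundary involved, here is a deliberately non-constant-time
conceptual sketch (the real record-layer code must be constant-time and is
structured quite differently):

    #include <stddef.h>

    /*
     * With padding value p, the record ends in p + 1 bytes all equal to p,
     * so every one of those p + 1 bytes must be inspected, including the
     * first, even when p is 255.
     */
    static int cbc_padding_ok(const unsigned char *rec, size_t len)
    {
        unsigned char p;
        size_t i;

        if (len == 0)
            return 0;
        p = rec[len - 1];
        if ((size_t)p + 1 > len)
            return 0;
        for (i = 0; i < (size_t)p + 1; i++) {
            if (rec[len - 1 - i] != p)
                return 0;
        }
        return 1;
    }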
Reviewed-by: Emilia Käsper <emilia@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1431)
Commit 3eb2aff renamed a field of ssl_cipher_st from algorithm_ssl to
min_tls, but neglected to update the fprintf reference that is only
compiled in with -DCIPHER_DEBUG.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1417)
These functions are:
SSL_use_certificate_file
SSL_use_RSAPrivateKey_file
SSL_use_PrivateKey_file
SSL_CTX_use_certificate_file
SSL_CTX_use_RSAPrivateKey_file
SSL_CTX_use_PrivateKey_file
SSL_use_certificate_chain_file
Internally, they use BIO_s_file(), which is defined and implemented at
all times, even when OpenSSL is configured no-stdio.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Baroque, almost uncommented code triggers behaviour which is undefined
by the C standard. You might quite reasonably not care that the code was
broken on ones-complement machines, but if we support a ubsan build then
we need to at least pretend to care.
It looks like the special-case code for 64-bit big-endian is going to
behave differently (and wrongly) on wrap-around, because it treats the
values as signed. That seems wrong, and allows replay and other attacks.
Surely you need to renegotiate and start a new epoch rather than
wrapping around to sequence number zero again?
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
DTLSv1_client_method() is deprecated, but it was the only way to obtain
DTLS1_BAD_VER support. The SSL_OP_CISCO_ANYCONNECT hack doesn't work with
DTLS_client_method(), and it's relatively non-trivial to make it work without
expanding the hack into lots of places.
So deprecate SSL_OP_CISCO_ANYCONNECT with DTLSv1_client_method(), and make
it work with SSL_CTX_set_{min,max}_proto_version(DTLS1_BAD_VER) instead.
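A usage sketch based on the above (minimal error handling; purely
illustrative):

    #include <openssl/ssl.h>

    /* Obtain DTLS1_BAD_VER support without DTLSv1_client_method(). */
    static SSL_CTX *make_bad_ver_ctx(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(DTLS_client_method());

        if (ctx == NULL)
            return NULL;
        if (!SSL_CTX_set_min_proto_version(ctx, DTLS1_BAD_VER)
                || !SSL_CTX_set_max_proto_version(ctx, DTLS1_BAD_VER)) {
            SSL_CTX_free(ctx);
            return NULL;
        }
        return ctx;
    }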
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
Commit 3eb2aff40 ("Add support for minimum and maximum protocol version
supported by a cipher") disabled all ciphers for DTLS1_BAD_VER.
That wasn't helpful. Give them back.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
DTLS version numbers are strange and backwards, except DTLS1_BAD_VER, so
we have to make a special case for it.
This does leave us with a set of macros which will evaluate their arguments
more than once, but it's not a public-facing API and it's not like this is
the kind of thing where people will be using DTLS_VERSION_LE(x++, y) anyway.
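A sketch of the idea (the committed macro bodies may well differ; note how
each argument can be evaluated more than once, as mentioned above):

    /*
     * Map DTLS1_BAD_VER (0x0100) onto the "backwards" numbering used by the
     * standard DTLS versions, where newer versions have numerically smaller
     * codes (DTLS 1.0 is 0xFEFF, DTLS 1.2 is 0xFEFD), then compare.
     */
    #define dtls_ver_ordinal(v)     (((v) == DTLS1_BAD_VER) ? 0xff00 : (v))
    #define DTLS_VERSION_LT(v1, v2) (dtls_ver_ordinal(v1) > dtls_ver_ordinal(v2))
    #define DTLS_VERSION_LE(v1, v2) (dtls_ver_ordinal(v1) >= dtls_ver_ordinal(v2))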
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
The Change Cipher Spec message in this ancient pre-standard version of DTLS,
which Cisco are unfortunately still using in their products, is 3 bytes.
Allow it.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
Commit d8e8590e ("Fix missing return value checks in SCTP") made the
DTLS handshake fail, even for non-SCTP connections, if
SSL_export_keying_material() fails. Which it does, for DTLS1_BAD_VER.
Apply the trivial fix to make it succeed, since there's no real reason
why it shouldn't even though we never need it.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
The MULTIBLOCK code uses a "jumbo" sized write buffer which it allocates
and then frees later. Pipelining, however, introduced multiple pipelines. It
keeps track of how many pipelines are initialised using numwpipes.
Unfortunately the MULTIBLOCK code was not updating this when it deallocated
its buffers, leading to a buffer being marked as initialised but set to
NULL.
RT#4618
Reviewed-by: Rich Salz <rsalz@openssl.org>
SSL_set_rbio() and SSL_set_wbio() are new functions in 1.1.0 and really
should be called SSL_set0_rbio() and SSL_set0_wbio(). The old
implementation was not consistent with what "set0" means, though, as there
were special cases around what happens if the rbio and wbio are the same.
We were only ever taking one reference on the BIO, and checking everywhere
whether the rbio and wbio are the same so as not to double free.
A better approach is to rename the functions to SSL_set0_rbio() and
SSL_set0_wbio(). If an existing BIO is present it is *always* freed
regardless of whether the rbio and wbio are the same or not. It is
therefore the caller's responsibility to ensure that a reference is taken
for *each* usage, i.e. one for the rbio and one for the wbio.
The legacy function SSL_set_bio() takes both the rbio and wbio in one go
and sets them both. We can wrap up the old behaviour in the implementation
of that function, i.e. previously if the rbio and wbio are the same in the
call to this function then the caller only needed to ensure one reference
was passed. This behaviour is retained by internally upping the ref count.
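A simplified sketch of that wrapper behaviour (not the exact committed code,
which handles several more corner cases around any pre-existing BIOs):

    #include <openssl/bio.h>
    #include <openssl/ssl.h>

    /*
     * SSL_set0_rbio()/SSL_set0_wbio() each consume one reference, so when
     * the caller passes a single BIO for both roles we take the second
     * reference on their behalf.
     */
    static void legacy_set_bio(SSL *s, BIO *rbio, BIO *wbio)
    {
        if (rbio != NULL && rbio == wbio)
            BIO_up_ref(rbio);   /* one ref for the rbio, one for the wbio */
        SSL_set0_rbio(s, rbio);
        SSL_set0_wbio(s, wbio);
    }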
This commit was inspired by BoringSSL commit f715c423224.
RT#4572
Reviewed-by: Rich Salz <rsalz@openssl.org>
The BIO_pop implementation assumes that the rbio still equals the next BIO
in the chain. While this would normally be the case, it is possible that it
could have been changed directly by the application. It also does not
properly cater for the scenario where the buffering BIO is still in place
for the write BIO.
Most of the existing BIO_pop code for SSL BIOs can be replaced by a single
call to SSL_set_bio(). This is equivalent to the existing code but
additionally handles the scenario where the rbio has been changed or the
buffering BIO is still in place.
Reviewed-by: Rich Salz <rsalz@openssl.org>
When pushing a BIO onto an SSL BIO we set the rbio and wbio for the SSL
object to be the BIO that has been pushed. Therefore we need to up the ref
count for that BIO. The existing code was upping the ref count on the wrong
BIO.
Reviewed-by: Rich Salz <rsalz@openssl.org>
When setting the read bio we free up any old existing one. However this can
lead to a double free if the existing one is the same as the write bio.
Reviewed-by: Rich Salz <rsalz@openssl.org>
SSLv2 is no longer supported in 1.1.0, however we *do* still accept an SSLv2
style ClientHello, as long as we then subsequently negotiate a protocol
version >= SSLv3. The record format for SSLv2 style ClientHellos is quite
different to SSLv3+. We only accept this format in the first record of an
initial ClientHello. Previously we checked this by confirming
s->first_packet is set and s->server is true. However, this really only
tells us that we are dealing with an initial ClientHello, not that it is
the first record (s->first_packet is badly named...it really means this is
the first message). To check this is the first record of the initial
ClientHello we should also check that we've not received any data yet
(s->init_num == 0), and that we've not had any empty records.
GitHub Issue #1298
Reviewed-by: Emilia Käsper <emilia@openssl.org>
This is adapted from BoringSSL commit 2f87112b963.
This fixes a number of bugs where the existence of bbio was leaked in the
public API and broke things.
- SSL_get_wbio returned the bbio during the handshake. It must always return
the BIO the consumer configured. In doing so, some internal accesses of
SSL_get_wbio should be switched to ssl->wbio since those want to see bbio.
- The logic in SSL_set_rfd, etc. (which I doubt is quite right since
SSL_set_bio's lifetime is unclear) would get confused once wbio got
wrapped. Those want to compare to SSL_get_wbio.
- If SSL_set_bio was called mid-handshake, bbio would get disconnected and
lose state. It forgets to reattach the bbio afterwards. Unfortunately,
Conscrypt does this a lot. It just never ended up calling it at a point
where the bbio would cause problems.
- Make more explicit the invariant that any bbio's which exist are always
attached. Simplify a few things as part of that.
RT#4572
Reviewed-by: Richard Levitte <levitte@openssl.org>
Fix some indentation at the same time
Reviewed-by: Matt Caswell <matt@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1292)
- Always process ALPN (previously there was an early return in the
certificate status handling)
- Don't send a duplicate alert. Previously, both
ssl_check_clienthello_tlsext_late and its caller would send an
alert. Consolidate alert sending code in the caller.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Continuing from the previous commit. Refactor tls_process_key_exchange() to
split out into a separate function the ECDHE aspects.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Continuing from the previous commit. Refactor tls_process_key_exchange() to
split out into a separate function the DHE aspects.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Continuing from the previous commit. Refactor tls_process_key_exchange() to
split out into a separate function the SRP aspects.
Reviewed-by: Richard Levitte <levitte@openssl.org>
The tls_process_key_exchange() function is too long. This commit starts
the process of splitting it up by moving the PSK preamble code to a
separate function.
Reviewed-by: Richard Levitte <levitte@openssl.org>
The function tls_process_key_exchange() is too long. This commit moves
the PSK preamble processing out to a separate function.
Reviewed-by: Richard Levitte <levitte@openssl.org>
If the SSL_SESS_CACHE_NO_INTERNAL_STORE cache mode is used then we weren't
removing sessions from the external cache, e.g. if an alert occurs the
session is supposed to be automatically removed.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Don't call strncpy with strlen of the source as the length. Don't call
strlen multiple times. Eventually we will want to replace this with proper
PACKET-style handling (but for construction of PACKETs instead of just
reading them as it is now). For now, though, this is safe because
PSK_MAX_IDENTITY_LEN will always fit into the destination buffer.
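A sketch of the safer copying pattern (illustrative only; the helper name
and parameters are assumptions):

    #include <string.h>

    /*
     * Measure the identity exactly once, bound the copy by the destination
     * size rather than strlen(source), and copy the NUL terminator too.
     */
    static int copy_psk_identity(char *dst, size_t dst_size,
                                 const char *identity)
    {
        size_t len = strlen(identity);

        if (len >= dst_size)
            return 0;                   /* identity too long for the buffer */
        memcpy(dst, identity, len + 1); /* includes the NUL terminator */
        return 1;
    }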
This addresses an OCAP Audit issue.
Reviewed-by: Richard Levitte <levitte@openssl.org>