TLS1.0 and TLS1.1 say you SHOULD ignore unrecognised record types, but
TLS 1.2 says you MUST send an unexpected message alert. We swap to the
TLS 1.2 behaviour for all protocol versions to prevent issues where no
progress is being made and the peer continually sends unrecognised record
types, using up resources processing them.
Issue reported by 郭志攀
Reviewed-by: Tim Hudson <tjh@openssl.org>
The function ssl3_read_n() takes a parameter |clearold| which, if set,
causes any old data in the read buffer to be forgotten, and any unread data
to be moved to the start of the buffer. This is supposed to happen when we
first read the record header.
However, the data move was only taking place if there was not already
sufficient data in the buffer to satisfy the request. If read_ahead is set
then the record header could be in the buffer already from when we read the
preceding record. So with read_ahead we can get into a situation where even
though |clearold| is set, the data does not get moved to the start of the
read buffer when we read the record header. This means there is insufficient
room in the read buffer to consume the rest of the record body, resulting in
an internal error.
This commit moves the |clearold| processing to earlier in ssl3_read_n()
to ensure that it always takes place.
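As a rough sketch of the intended behaviour (the field names here are hypothetical and do not match the real SSL3_BUFFER layout; this is not the actual ssl3_read_n() code):

    #include <string.h>

    /* Illustrative only: when |clearold| is set, any unread bytes must be
     * moved to the start of the buffer unconditionally, even if enough data
     * is already buffered to satisfy the current request. */
    static void clear_old_data(unsigned char *buf, size_t *offset, size_t left)
    {
        if (*offset > 0 && left > 0)
            memmove(buf, buf + *offset, left); /* move unread bytes to the front */
        *offset = 0;                           /* old, already-read data is forgotten */
    }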
Reviewed-by: Richard Levitte <levitte@openssl.org>
We add ssl_cipher_get_overhead() as an internal function, to avoid
having too much ciphersuite-specific knowledge in DTLS_get_data_mtu()
itself. It's going to need adjustment for TLSv1.3... but then again, so
is fairly much *all* of the SSL_CIPHER handling. This bit is in the noise.
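To illustrate the kind of arithmetic involved, a hedged sketch of per-record overhead for a CBC ciphersuite; the constants are assumptions for a generic AES-CBC/HMAC-SHA256 DTLS 1.2 suite, whereas the real ssl_cipher_get_overhead() derives them from the SSL_CIPHER itself:

    #include <stddef.h>

    /* Worst-case bytes added to each record's payload for an assumed
     * AES-CBC + HMAC-SHA256 DTLS 1.2 ciphersuite. */
    static size_t cbc_record_overhead(void)
    {
        const size_t explicit_iv = 16; /* one AES block, sent per record */
        const size_t mac = 32;         /* HMAC-SHA256 tag */
        const size_t padding = 16;     /* worst-case CBC padding */

        return explicit_iv + mac + padding;
    }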
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
When matching a ciphersuite if we are given an id, make sure we use it
otherwise we will match another ciphersuite which is identical except for
the TLS version.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Includes addition of the various options to s_server/s_client. Also adds
one of the new TLS1.3 ciphersuites.
This isn't "real" TLS1.3!! It's identical to TLS1.2 apart from the protocol
and the ciphersuite...and the ciphersuite is just a renamed TLS1.2 one (not
a "real" TLS1.3 ciphersuite).
Reviewed-by: Rich Salz <rsalz@openssl.org>
For convenience, combine getting a new ref for the new SSL_CTX
with assigning the store and freeing the old one.
Reviewed-by: Matt Caswell <matt@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1755)
A zero return from BIO_read()/BIO_write() could mean that an IO operation
is retryable. A zero return from SSL_read()/SSL_write() means that the
connection has been closed down (either cleanly or not). Therefore we
should not propagate a zero return value from BIO_read()/BIO_write() back
up the stack to SSL_read()/SSL_write(). Doing so could result in a retryable
failure being treated as fatal.
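For reference, this is how an application is expected to tell the two cases apart using the public API (a minimal sketch, with error handling trimmed):

    #include <openssl/ssl.h>

    /* Read application data, distinguishing "retry later" from "closed".
     * A 0 return from SSL_read() means the connection was shut down;
     * retryable conditions are reported via SSL_get_error() instead. */
    static int read_app_data(SSL *ssl, void *buf, int len)
    {
        int n = SSL_read(ssl, buf, len);

        if (n > 0)
            return n;                  /* got data */

        switch (SSL_get_error(ssl, n)) {
        case SSL_ERROR_WANT_READ:
        case SSL_ERROR_WANT_WRITE:
            return 0;                  /* retryable: try again later */
        case SSL_ERROR_ZERO_RETURN:
            return -1;                 /* peer closed the connection cleanly */
        default:
            return -1;                 /* fatal error */
        }
    }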
Reviewed-by: Richard Levitte <levitte@openssl.org>
OpenSSL 1.1.0 will negotiate EtM on DTLS but will then not actually *do* it.
If we use DTLSv1.2 that will hopefully be harmless since we'll tend to use
an AEAD ciphersuite anyway. But if we're using DTLSv1, then we certainly
will end up using CBC, so EtM is relevant — and we fail to interoperate with
anything that implements EtM correctly.
Fixing it in HEAD and 1.1.0c will mean that 1.1.0[ab] are incompatible with
1.1.0c+... for the limited case of non-AEAD ciphers, where they're *already*
incompatible with other implementations due to this bug anyway. That seems
reasonable enough, so let's do it. The only alternative is just to turn it
off for ever... which *still* leaves 1.1.0[ab] failing to communicate with
non-OpenSSL implementations anyway.
Tested against itself as well as against GnuTLS both with and without EtM.
Reviewed-by: Tim Hudson <tjh@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
The prevailing style seems to not have trailing whitespace, but a few
lines do. This is mostly in the perlasm files, but a few C files got
them after the reformat. This is the result of:
find . -name '*.pl' | xargs sed -E -i '' -e 's/( |'$'\t'')*$//'
find . -name '*.c' | xargs sed -E -i '' -e 's/( |'$'\t'')*$//'
find . -name '*.h' | xargs sed -E -i '' -e 's/( |'$'\t'')*$//'
Then bn_prime.h was excluded since this is a generated file.
Note mkerr.pl has some changes in a heredoc for some help output, but
other lines there lack trailing whitespace too.
Reviewed-by: Kurt Roeckx <kurt@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
We now set the handshake header, and close the packet directly in the
write_state_machine. This is now possible because it is common for all
messages.
Reviewed-by: Rich Salz <rsalz@openssl.org>
tls_construct_finished() used to have different arguments to all of the
other construction functions. It doesn't anymore, so there is no need to
treat it as a special case.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Ensure all message types work the same way including CCS so that the state
machine doesn't need to know about special cases. Put all the special logic
into ssl_set_handshake_header() and ssl_close_construct_packet().
Reviewed-by: Rich Salz <rsalz@openssl.org>
Instead of initialising, finishing and cleaning up the WPACKET in every
message construction function, we should do it once in
write_state_machine().
Reviewed-by: Rich Salz <rsalz@openssl.org>
ssl_set_handshake_header2() was only ever a temporary name while we had
to have ssl_set_handshake_header() for code that hadn't been converted to
WPACKET yet. No code remains that needed that so we can rename it.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Remove the old ssl_set_handshake_header() implementations. Later we will
rename ssl_set_handshake_header2() to ssl_set_handshake_header().
Reviewed-by: Rich Salz <rsalz@openssl.org>
In plain PSK we don't need to do any more construction after the preamble.
We weren't detecting this case and treating it as an unknown cipher.
Reviewed-by: Rich Salz <rsalz@openssl.org>
WPACKET_allocate_bytes() requires you to know the size of the data you
are allocating before you write it. Sometimes this isn't the case, for
example we know the maximum size that a signature will be before we
create it, but not the actual size. WPACKET_reserve_bytes() enables us to
reserve bytes in the WPACKET without counting them as written yet. We then
subsequently need to call WPACKET_allocate_bytes() to actually count them as
written.
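A toy model of the reserve-then-allocate idea; this is not the real WPACKET API, just a self-contained illustration of the pattern described above:

    #include <stddef.h>

    /* Reserve an upper bound, produce the variable-length data in place,
     * then commit only the bytes actually produced. */
    struct toy_pkt {
        unsigned char buf[1024];
        size_t written;                /* bytes committed so far */
    };

    static unsigned char *toy_reserve(struct toy_pkt *p, size_t maxlen)
    {
        if (p->written + maxlen > sizeof(p->buf))
            return NULL;               /* not enough room to reserve */
        return p->buf + p->written;    /* reserved, but not yet counted */
    }

    static int toy_commit(struct toy_pkt *p, size_t len)
    {
        if (p->written + len > sizeof(p->buf))
            return 0;
        p->written += len;             /* only now do the bytes count as written */
        return 1;
    }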
Reviewed-by: Rich Salz <rsalz@openssl.org>
This was a temporary function needed during the conversion to WPACKET. All
callers have now been converted to the new way of doing this so this
function is no longer required.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Some functions were being called from both code that used WPACKETs and code
that did not. Now that more code has been converted to use WPACKETs some of
that duplication can be removed.
Reviewed-by: Rich Salz <rsalz@openssl.org>
If we have a handshake fragment waiting then dtls1_read_bytes() was not
correctly setting the value of recvd_type, leading to an uninit read.
Reviewed-by: Rich Salz <rsalz@openssl.org>
The buffer to receive messages is initialised to 16k. If a message is
received that is larger than that then the buffer is "realloc'd". This can
cause the location of the underlying buffer to change. Anything that is
referring to the old location will be referring to free'd data. In the
recent commits c1ef7c97 (master) and 4b390b6c (1.1.0) the point in the code
where the message buffer is grown was changed. However s->init_msg was not
updated to point at the new location.
CVE-2016-6309
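The underlying C pitfall, shown in a self-contained form rather than the actual OpenSSL code: any pointer derived from the old allocation must be recomputed from the new base after realloc().

    #include <stdlib.h>

    /* After realloc() the block may have moved; recompute derived pointers
     * from the new base rather than keeping ones computed from the old one. */
    static unsigned char *grow_and_rebase(unsigned char **base, size_t msg_offset,
                                          size_t newsize)
    {
        unsigned char *p = realloc(*base, newsize);

        if (p == NULL)
            return NULL;
        *base = p;                 /* the buffer may now live at a new address */
        return p + msg_offset;     /* the equivalent of re-pointing s->init_msg */
    }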
Reviewed-by: Emilia Käsper <emilia@openssl.org>
If we request more bytes to be allocated than double what we have already
written, then we grow the buffer by the wrong amount.
Reviewed-by: Emilia Käsper <emilia@openssl.org>
We actually construct a HelloVerifyRequest in two places with common code
pulled into a single function. This one commit handles both places.
Reviewed-by: Rich Salz <rsalz@openssl.org>
If the underlying BUF_MEM gets realloc'd then the pointer returned could
become invalid. Therefore we should always ensure that the allocated
memory is filled in prior to any more WPACKET_* calls.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Russian GOST ciphersuites are vulnerable to the KCI attack because they use
long-term keys to establish the connection when ssl client authorization is
on. This change brings the GOST implementation into line with the latest
specs in order to avoid the attack. It should not break backwards
compatibility.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
If while calling SSL_peek() we read an empty record then we go into an
infinite loop, continually trying to read data from the empty record and
never making any progress. This could be exploited by a malicious peer in
a Denial Of Service attack.
CVE-2016-6305
GitHub Issue #1563
Reviewed-by: Rich Salz <rsalz@openssl.org>
If a server sent multiple NPN extensions in a single ServerHello then a
mem leak can occur. This will only happen where the client has requested
NPN in the first place. It does not occur during renegotiation. Therefore
the maximum that could be leaked in a single connection with a malicious
server is 64k (the maximum size of the ServerHello extensions section). As
this only occurs on the client side, only when NPN has been requested, and
not during renegotiation, it is unlikely to be exploitable.
Issue reported by Shi Lei.
Reviewed-by: Rich Salz <rsalz@openssl.org>
A malicious client can send an excessively large OCSP Status Request
extension. If that client continually requests renegotiation,
sending a large OCSP Status Request extension each time, then there will
be unbounded memory growth on the server. This will eventually lead to a
Denial Of Service attack through memory exhaustion. Servers with a
default configuration are vulnerable even if they do not support OCSP.
Builds using the "no-ocsp" build time option are not affected.
I have also checked other extensions to see if they suffer from a similar
problem but I could not find any other issues.
CVE-2016-6304
Issue reported by Shi Lei.
Reviewed-by: Rich Salz <rsalz@openssl.org>
This issue is very similar to CVE-2016-6307 described in the previous
commit. The underlying defect is different but the security analysis and
impacts are the same except that it impacts DTLS.
A DTLS message includes 3 bytes for its length in the header for the
message.
This would allow for messages up to 16Mb in length. Messages of this length
are excessive and OpenSSL includes a check to ensure that a peer is sending
reasonably sized messages in order to avoid too much memory being consumed
to service a connection. A flaw in the logic of version 1.1.0 means that
memory for the message is allocated too early, prior to the excessive
message length check. Due to the way memory is allocated in OpenSSL this could
mean an attacker could force up to 21Mb to be allocated to service a
connection. This could lead to a Denial of Service through memory
exhaustion. However, the excessive message length check still takes place,
and this would cause the connection to immediately fail. Assuming that the
application calls SSL_free() on the failed connection in a timely manner
then the 21Mb of allocated memory will then be immediately freed again.
Therefore the excessive memory allocation will be transitory in nature.
This then means that there is only a security impact if:
1) The application does not call SSL_free() in a timely manner in the
event that the connection fails
or
2) The application is working in a constrained environment where there
is very little free memory
or
3) The attacker initiates multiple connection attempts such that there
are multiple connections in a state where memory has been allocated for
the connection; SSL_free() has not yet been called; and there is
insufficient memory to service the multiple requests.
Except in the instance of (1) above any Denial Of Service is likely to
be transitory because as soon as the connection fails the memory is
subsequently freed again in the SSL_free() call. However there is an
increased risk during this period of application crashes due to the lack
of memory - which would then mean a more serious Denial of Service.
This issue does not affect TLS users.
Issue was reported by Shi Lei (Gear Team, Qihoo 360 Inc.).
CVE-2016-6308
Reviewed-by: Richard Levitte <levitte@openssl.org>
A TLS message includes 3 bytes for its length in the header for the message.
This would allow for messages up to 16Mb in length. Messages of this length
are excessive and OpenSSL includes a check to ensure that a peer is sending
reasonably sized messages in order to avoid too much memory being consumed
to service a connection. A flaw in the logic of version 1.1.0 means that
memory for the message is allocated too early, prior to the excessive
message length check. Due to the way memory is allocated in OpenSSL this could
mean an attacker could force up to 21Mb to be allocated to service a
connection. This could lead to a Denial of Service through memory
exhaustion. However, the excessive message length check still takes place,
and this would cause the connection to immediately fail. Assuming that the
application calls SSL_free() on the failed connection in a timely manner
then the 21Mb of allocated memory will then be immediately freed again.
Therefore the excessive memory allocation will be transitory in nature.
This then means that there is only a security impact if:
1) The application does not call SSL_free() in a timely manner in the
event that the connection fails
or
2) The application is working in a constrained environment where there
is very little free memory
or
3) The attacker initiates multiple connection attempts such that there
are multiple connections in a state where memory has been allocated for
the connection; SSL_free() has not yet been called; and there is
insufficient memory to service the multiple requests.
Except in the instance of (1) above any Denial Of Service is likely to
be transitory because as soon as the connection fails the memory is
subsequently freed again in the SSL_free() call. However there is an
increased risk during this period of application crashes due to the lack
of memory - which would then mean a more serious Denial of Service.
This issue does not affect DTLS users.
Issue was reported by Shi Lei (Gear Team, Qihoo 360 Inc.).
CVE-2016-6307
Reviewed-by: Richard Levitte <levitte@openssl.org>
Certain warning alerts are ignored if they are received. This can mean that
no progress will be made if one peer continually sends those warning alerts.
Implement a count so that we abort the connection if we receive too many.
Issue reported by Shi Lei.
Reviewed-by: Rich Salz <rsalz@openssl.org>
All the other functions that take an argument for the number of bytes
use convenience macros for this purpose. We should do the same with
WPACKET_put_bytes().
Reviewed-by: Rich Salz <rsalz@openssl.org>
Makes the logic a little bit clearer.
Reviewed-by: Andy Polyakov <appro@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1571)
This reverts commit 77a6be4dfc.
There were some unexpected side effects of this commit, e.g. in SSLv3 a
"no_certificate" warning alert gets sent if a client does not send a
Certificate during Client Auth. With the above commit this causes the
connection to abort, which is incorrect. There may be some other edge cases
like this so we need to have a rethink on this.
Reviewed-by: Tim Hudson <tjh@openssl.org>
Updated the construction code to use the new function. Also added some
convenience macros for WPACKET_sub_memcpy().
Reviewed-by: Rich Salz <rsalz@openssl.org>
A peer continually sending unrecognised warning alerts could mean that we
make no progress on a connection. We should abort rather than continuing if
we receive an unrecognised warning alert.
Thanks to Shi Lei for reporting this issue.
Reviewed-by: Rich Salz <rsalz@openssl.org>
This is an internal API. Some of the tests were for programmer error and
"should not happen" situations, so a soft assert is reasonable.
Reviewed-by: Rich Salz <rsalz@openssl.org>
A few style tweaks here and there. The main change is that curr and
packet_len are now offsets into the buffer to account for the fact that
the pointers can change if the buffer grows. Also dropped support for the
WPACKET_set_packet_len() function. I thought that was going to be needed
but so far it hasn't been. It doesn't really work any more due to the
offsets change.
Reviewed-by: Rich Salz <rsalz@openssl.org>
The PACKET documentation is already in packet_locl.h so it makes sense to
have the WPACKET documentation there as well.
Reviewed-by: Rich Salz <rsalz@openssl.org>
The function tls_construct_cert_status() is called by both TLS and DTLS
code. However it only ever constructed a TLS message header for the message
which obviously failed in DTLS.
Reviewed-by: Rich Salz <rsalz@openssl.org>
It is never valid to call ssl3_read_bytes with
type == SSL3_RT_CHANGE_CIPHER_SPEC, and in fact we check for valid values
for type near the beginning of the function. Therefore this check will never
be true and can be removed.
Reviewed-by: Tim Hudson <tjh@openssl.org>
If a ticket callback changes the HMAC digest to SHA512 the existing
sanity checks are not sufficient and an attacker could perform a DoS
attack with a malformed ticket. Add additional checks based on
HMAC size.
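A sketch of the kind of check needed; the names and the 16-byte key-name constant are illustrative, not the actual tls_decrypt_ticket() code:

    #include <openssl/hmac.h>
    #include <openssl/evp.h>

    /* The minimum acceptable ticket length must account for the HMAC size
     * chosen by the ticket callback (e.g. 64 bytes for SHA-512), plus the
     * 16-byte key name and the cipher IV, rather than a fixed constant. */
    static int ticket_len_ok(const HMAC_CTX *hctx, const EVP_CIPHER_CTX *cctx,
                             size_t ticket_len)
    {
        size_t mac_len = HMAC_size(hctx);
        int iv_len = EVP_CIPHER_CTX_iv_length(cctx);

        if (iv_len < 0)
            return 0;
        return ticket_len >= 16 + (size_t)iv_len + mac_len;
    }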
Thanks to Shi Lei for reporting this bug.
CVE-2016-6302
Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
Follow on from CVE-2016-2179
The investigation and analysis of CVE-2016-2179 highlighted a related flaw.
This commit fixes a security "near miss" in the buffered message handling
code. Ultimately this is not currently believed to be exploitable due to
the reasons outlined below, and therefore there is no CVE for this on its
own.
The issue this commit fixes is a MITM attack where the attacker can inject
a Finished message into the handshake. In the description below it is
assumed that the attacker injects the Finished message for the server to
receive it. The attack could work equally well the other way around (i.e.
where the client receives the injected Finished message).
The MITM requires the following capabilities:
- The ability to manipulate the MTU that the client selects such that it
is small enough for the client to fragment Finished messages.
- The ability to selectively drop and modify records sent from the client
- The ability to inject its own records and send them to the server
The MITM forces the client to select a small MTU such that the client
will fragment the Finished message. Ideally for the attacker the first
fragment will contain all but the last byte of the Finished message,
with the second fragment containing the final byte.
During the handshake and prior to the client sending the CCS the MITM
injects a plaintext Finished message fragment to the server containing
all but the final byte of the Finished message. The message sequence
number should be the one expected to be used for the real Finished message.
OpenSSL will recognise that the received fragment is for the future and
will buffer it for later use.
After the client sends the CCS it then sends its own Finished message in
two fragments. The MITM causes the first of these fragments to be
dropped. The OpenSSL server will then receive the second of the fragments
and reassemble the complete Finished message consisting of the MITM
fragment and the final byte from the real client.
The advantage to the attacker in injecting a Finished message is that
this provides the capability to modify other handshake messages (e.g.
the ClientHello) undetected. A difficulty for the attacker is knowing in
advance what impact any of those changes might have on the final byte of
the handshake hash that is going to be sent in the "real" Finished
message. In the worst case for the attacker this means that only 1 in
256 of such injection attempts will succeed.
It may be possible in some situations for the attacker to improve this such
that all attempts succeed. For example if the handshake includes client
authentication then the final message flight sent by the client will
include a Certificate. Certificates are ASN.1 objects where the signed
portion is DER encoded. The non-signed portion could be BER encoded and so
the attacker could re-encode the certificate such that the hash for the
whole handshake comes to a different value. The certificate re-encoding
would not be detectable because only the non-signed portion is changed. As
this is the final flight of messages sent from the client the attacker
knows what the complete handshake hash value will be that the client will
send - and therefore knows what the final byte will be. Through a process
of trial and error the attacker can re-encode the certificate until the
modified handshake also has a hash with the same final byte. This means
that when the Finished message is verified by the server it will be
correct in all cases.
In practice the MITM would need to be able to perform the same attack
against both the client and the server. If the attack is only performed
against the server (say) then the server will not detect the modified
handshake, but the client will and will abort the connection.
Fortunately, although OpenSSL is vulnerable to Finished message
injection, it is not vulnerable if *both* client and server are OpenSSL.
The reason is that OpenSSL has a hard "floor" for a minimum MTU size
that it will never go below. This minimum means that a Finished message
will never be sent in a fragmented form and therefore the MITM does not
have one of its pre-requisites. Therefore this could only be exploited
if using OpenSSL and some other DTLS peer that had its own and separate
Finished message injection flaw.
The fix is to ensure buffered messages are cleared on epoch change.
Reviewed-by: Richard Levitte <levitte@openssl.org>
DTLS can handle out of order record delivery. Additionally since
handshake messages can be bigger than will fit into a single packet, the
messages can be fragmented across multiple records (as with normal TLS).
That means that the messages can arrive mixed up, and we have to
reassemble them. We keep a queue of buffered messages that are "from the
future", i.e. messages we're not ready to deal with yet but have arrived
early. The messages held there may not be full yet - they could be one
or more fragments that are still in the process of being reassembled.
The code assumes that we will eventually complete the reassembly and
when that occurs the complete message is removed from the queue at the
point that we need to use it.
However, DTLS is also tolerant of packet loss. To get around that DTLS
messages can be retransmitted. If we receive a full (non-fragmented)
message from the peer after previously having received a fragment of
that message, then we ignore the message in the queue and just use the
non-fragmented version. At that point the queued message will never get
removed.
Additionally the peer could send "future" messages that we never get to
in order to complete the handshake. Each message has a sequence number
(starting from 0). We will accept a message fragment for the current
message sequence number, or for any sequence up to 10 into the future.
However if the Finished message has a sequence number of 2, anything
greater than that in the queue is just left there.
So, in those two ways we can end up with "orphaned" data in the queue
that will never get removed - except when the connection is closed. At
that point all the queues are flushed.
An attacker could seek to exploit this by filling up the queues with
lots of large messages that are never going to be used in order to
attempt a DoS by memory exhaustion.
I will assume that we are only concerned with servers here. It does not
seem reasonable to be concerned about a memory exhaustion attack on a
client. They are unlikely to process enough connections for this to be
an issue.
A "long" handshake with many messages might be 5 messages long (in the
incoming direction), e.g. ClientHello, Certificate, ClientKeyExchange,
CertificateVerify, Finished. So this would be message sequence numbers 0
to 4. Additionally we can buffer up to 10 messages in the future.
Therefore the maximum number of messages that an attacker could send
that could get orphaned would typically be 15.
The maximum size that a DTLS message is allowed to be is defined by
max_cert_list, which by default is 100k. Therefore the maximum amount of
"orphaned" memory per connection is 1500k.
Message sequence numbers get reset after the Finished message, so
renegotiation will not extend the maximum number of messages that can be
orphaned per connection.
As noted above, the queues do get cleared when the connection is closed.
Therefore in order to mount an effective attack, an attacker would have
to open many simultaneous connections.
Issue reported by Quan Luo.
CVE-2016-2179
Reviewed-by: Richard Levitte <levitte@openssl.org>
The DTLS implementation provides some protection against replay attacks
in accordance with RFC6347 section 4.1.2.6.
A sliding "window" of valid record sequence numbers is maintained with
the "right" hand edge of the window set to the highest sequence number we
have received so far. Records that arrive that are off the "left" hand
edge of the window are rejected. Records within the window are checked
against a list of records received so far. If we already received it then
we also reject the new record.
If we have not already received the record, or the sequence number is off
the right hand edge of the window then we verify the MAC of the record.
If MAC verification fails then we discard the record. Otherwise we mark
the record as received. If the sequence number was off the right hand edge
of the window, then we slide the window along so that the right hand edge
is in line with the newly received sequence number.
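A self-contained model of that sliding window (the real code lives in the DTLS record layer; this just illustrates the bookkeeping described above):

    #include <stdint.h>

    /* A 64-entry bitmap whose top edge tracks the highest sequence number
     * seen so far; bit i set means (top - i) has already been received. */
    struct replay_window {
        uint64_t top;
        uint64_t bitmap;
    };

    static int seen_before(const struct replay_window *w, uint64_t seq)
    {
        if (seq > w->top)
            return 0;                      /* right of the window: new */
        if (w->top - seq >= 64)
            return 1;                      /* off the left edge: reject */
        return (w->bitmap >> (w->top - seq)) & 1;
    }

    static void mark_received(struct replay_window *w, uint64_t seq)
    {
        if (seq > w->top) {                /* slide the window to the right */
            uint64_t shift = seq - w->top;
            w->bitmap = (shift >= 64) ? 1 : (w->bitmap << shift) | 1;
            w->top = seq;
        } else if (w->top - seq < 64) {
            w->bitmap |= (uint64_t)1 << (w->top - seq);
        }
    }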
Records may arrive for future epochs, i.e. a record sent after a CCS can
arrive before the CCS does if the packets get re-ordered. As we
have not yet received the CCS we are not yet in a position to decrypt or
validate the MAC of those records. OpenSSL places those records on an
unprocessed records queue. It additionally updates the window immediately,
even though we have not yet verified the MAC. This will only occur if
currently in a handshake/renegotiation.
This could be exploited by an attacker by sending a record for the next
epoch (which does not have to decrypt or have a valid MAC), with a very
large sequence number. This means the right hand edge of the window is
moved very far to the right, and all subsequent legitimate packets are
dropped causing a denial of service.
A similar effect can be achieved during the initial handshake. In this
case there is no MAC key negotiated yet. Therefore an attacker can send a
message for the current epoch with a very large sequence number. The code
will process the record as normal. If the handshake message sequence number
(as opposed to the record sequence number that we have been talking about
so far) is in the future then the injected message is buffered to be
handled later, but the window is still updated. Therefore all subsequent
legitimate handshake records are dropped. This aspect is not considered a
security issue because there are many ways for an attacker to disrupt the
initial handshake and prevent it from completing successfully (e.g.
injection of a handshake message will cause the Finished MAC to fail and
the handshake to be aborted). This issue comes about as a result of trying
to do replay protection, but having no integrity mechanism in place yet.
Does it even make sense to have replay protection in epoch 0? That
issue isn't addressed here though.
This addressed an OCAP Audit issue.
CVE-2016-2181
Reviewed-by: Richard Levitte <levitte@openssl.org>
During a DTLS handshake we may get records destined for the next epoch
arrive before we have processed the CCS. In that case we can't decrypt or
verify the record yet, so we buffer it for later use. When we do receive
the CCS we work through the queue of unprocessed records and process them.
Unfortunately the act of processing wipes out any existing packet data
that we were still working through. This includes any records from the new
epoch that were in the same packet as the CCS. We should only process the
buffered records if we've not got any data left.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Run util/openssl-format-source on ssl/
Some comments and hand-formatted tables were fixed up
manually by disabling auto-formatting.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Feedback on the previous SSLv2 ClientHello processing fix was that it
breaks layering by reading init_num in the record layer. It also does not
detect if there was a previous non-fatal warning.
This is an alternative approach that directly tracks in the record layer
whether this is the first record.
GitHub Issue #1298
Reviewed-by: Tim Hudson <tjh@openssl.org>
When handling ECDH check to see if the curve is "custom" (X25519 is
currently the only curve of this type) and instead of setting a curve
NID just allocate a key of appropriate type.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Thanks to Peter Gijsels for pointing out that if a CBC record has 255
bytes of padding, the first byte was not being checked.
(This is an import of change 80842bdb from BoringSSL.)
Reviewed-by: Emilia Käsper <emilia@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1431)
Commit 3eb2aff renamed a field of ssl_cipher_st from algorithm_ssl to
min_tls, but neglected to update the fprintf reference that is compiled in
under -DCIPHER_DEBUG.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1417)
These functions are:
SSL_use_certificate_file
SSL_use_RSAPrivateKey_file
SSL_use_PrivateKey_file
SSL_CTX_use_certificate_file
SSL_CTX_use_RSAPrivateKey_file
SSL_CTX_use_PrivateKey_file
SSL_use_certificate_chain_file
Internally, they use BIO_s_file(), which is defined and implemented at
all times, even when OpenSSL is configured no-stdio.
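Typical usage of the file-based loaders, for reference ("server.crt" and "server.key" are placeholder paths):

    #include <openssl/ssl.h>

    /* Load a PEM certificate and private key from files and confirm that
     * they match. */
    static int load_creds(SSL_CTX *ctx)
    {
        if (SSL_CTX_use_certificate_file(ctx, "server.crt", SSL_FILETYPE_PEM) <= 0)
            return 0;
        if (SSL_CTX_use_PrivateKey_file(ctx, "server.key", SSL_FILETYPE_PEM) <= 0)
            return 0;
        return SSL_CTX_check_private_key(ctx);
    }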
Reviewed-by: Rich Salz <rsalz@openssl.org>
Baroque, almost uncommented code triggers behaviour which is undefined
by the C standard. You might quite reasonably not care that the code was
broken on ones-complement machines, but if we support a ubsan build then
we need to at least pretend to care.
It looks like the special-case code for 64-bit big-endian is going to
behave differently (and wrongly) on wrap-around, because it treats the
values as signed. That seems wrong, and allows replay and other attacks.
Surely you need to renegotiate and start a new epoch rather than
wrapping around to sequence number zero again?
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
DTLSv1_client_method() is deprecated, but it was the only way to obtain
DTLS1_BAD_VER support. The SSL_OP_CISCO_ANYCONNECT hack doesn't work with
DTLS_client_method(), and it's relatively non-trivial to make it work without
expanding the hack into lots of places.
So deprecate SSL_OP_CISCO_ANYCONNECT with DTLSv1_client_method(), and make
it work with SSL_CTX_set_{min,max}_proto_version(DTLS1_BAD_VER) instead.
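A minimal sketch of the new way to enable DTLS1_BAD_VER support, as described above:

    #include <openssl/ssl.h>

    /* Select the pre-standard Cisco DTLS version via the version-range API
     * rather than the deprecated DTLSv1_client_method() route. */
    static SSL_CTX *make_bad_ver_ctx(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(DTLS_client_method());

        if (ctx == NULL)
            return NULL;
        if (!SSL_CTX_set_min_proto_version(ctx, DTLS1_BAD_VER)
                || !SSL_CTX_set_max_proto_version(ctx, DTLS1_BAD_VER)) {
            SSL_CTX_free(ctx);
            return NULL;
        }
        return ctx;
    }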
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
Commit 3eb2aff40 ("Add support for minimum and maximum protocol version
supported by a cipher") disabled all ciphers for DTLS1_BAD_VER.
That wasn't helpful. Give them back.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
DTLS version numbers are strange and backwards, except DTLS1_BAD_VER so
we have to make a special case for it.
This does leave us with a set of macros which will evaluate their arguments
more than once, but it's not a public-facing API and it's not like this is
the kind of thing where people will be using DTLS_VERSION_LE(x++, y) anyway.
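A sketch of the comparison idea; the real macros are internal and their exact form is assumed here:

    #include <openssl/dtls1.h> /* DTLS1_BAD_VER, DTLS1_VERSION, DTLS1_2_VERSION */

    /* Map DTLS1_BAD_VER onto the "backwards" DTLS numbering so that a single
     * inverted comparison covers every version. As noted above, the
     * arguments are evaluated more than once. */
    #define dtls_ver_ordinal(v)     (((v) == DTLS1_BAD_VER) ? 0xff00 : (v))
    #define DTLS_VERSION_LT(v1, v2) (dtls_ver_ordinal(v1) > dtls_ver_ordinal(v2))
    #define DTLS_VERSION_LE(v1, v2) (dtls_ver_ordinal(v1) >= dtls_ver_ordinal(v2))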
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
The Change Cipher Spec message in this ancient pre-standard version of DTLS,
which Cisco are unfortunately still using in their products, is 3 bytes.
Allow it.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
Commit d8e8590e ("Fix missing return value checks in SCTP") made the
DTLS handshake fail, even for non-SCTP connections, if
SSL_export_keying_material() fails. Which it does, for DTLS1_BAD_VER.
Apply the trivial fix to make it succeed, since there's no real reason
why it shouldn't even though we never need it.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Matt Caswell <matt@openssl.org>
The MULTIBLOCK code uses a "jumbo" sized write buffer which it allocates
and then frees later. Pipelining however introduced multiple pipelines. It
keeps track of how many pipelines are initialised using numwpipes.
Unfortunately the MULTIBLOCK code was not updating this when it deallocated
its buffers, leading to a buffer being marked as initialised but set to
NULL.
RT#4618
Reviewed-by: Rich Salz <rsalz@openssl.org>
SSL_set_rbio() and SSL_set_wbio() are new functions in 1.1.0 and really
should be called SSL_set0_rbio() and SSL_set0_wbio(). The old
implementation was not consistent with what "set0" means though as there
were special cases around what happens if the rbio and wbio are the same.
We were only ever taking one reference on the BIO, and checking everywhere
whether the rbio and wbio are the same so as not to double free.
A better approach is to rename the functions to SSL_set0_rbio() and
SSL_set0_wbio(). If an existing BIO is present it is *always* freed
regardless of whether the rbio and wbio are the same or not. It is
therefore the caller's responsibility to ensure that a reference is taken
for *each* usage, i.e. one for the rbio and one for the wbio.
The legacy function SSL_set_bio() takes both the rbio and wbio in one go
and sets them both. We can wrap up the old behaviour in the implementation
of that function, i.e. previously if the rbio and wbio are the same in the
call to this function then the caller only needed to ensure one reference
was passed. This behaviour is retained by internally upping the ref count.
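Under these set0 semantics, using one BIO for both directions looks like this (a minimal sketch; the caller's own reference is handed over as one of the two):

    #include <openssl/ssl.h>
    #include <openssl/bio.h>

    /* Transfers the caller's reference to |bio| and takes one extra
     * reference, so that the rbio and the wbio each own one. */
    static void attach_bio(SSL *s, BIO *bio)
    {
        BIO_up_ref(bio);       /* second reference: one each for rbio and wbio */
        SSL_set0_rbio(s, bio);
        SSL_set0_wbio(s, bio);
    }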
This commit was inspired by BoringSSL commit f715c423224.
RT#4572
Reviewed-by: Rich Salz <rsalz@openssl.org>
The BIO_pop implementation assumes that the rbio still equals the next BIO
in the chain. While this would normally be the case, it is possible that it
could have been changed directly by the application. It also does not
properly cater for the scenario where the buffering BIO is still in place
for the write BIO.
Most of the existing BIO_pop code for SSL BIOs can be replaced by a single
call to SSL_set_bio(). This is equivalent to the existing code but
additionally handles the scenario where the rbio has been changed or the
buffering BIO is still in place.
Reviewed-by: Rich Salz <rsalz@openssl.org>
When pushing a BIO onto an SSL BIO we set the rbio and wbio for the SSL
object to be the BIO that has been pushed. Therefore we need to up the ref
count for that BIO. The existing code was upping the ref count on the wrong
BIO.
Reviewed-by: Rich Salz <rsalz@openssl.org>
When setting the read bio we free up any old existing one. However this can
lead to a double free if the existing one is the same as the write bio.
Reviewed-by: Rich Salz <rsalz@openssl.org>
SSLv2 is no longer supported in 1.1.0, however we *do* still accept an SSLv2
style ClientHello, as long as we then subsequently negotiate a protocol
version >= SSLv3. The record format for SSLv2 style ClientHellos is quite
different to SSLv3+. We only accept this format in the first record of an
initial ClientHello. Previously we checked this by confirming
s->first_packet is set and s->server is true. However, this really only
tells us that we are dealing with an initial ClientHello, not that it is
the first record (s->first_packet is badly named...it really means this is
the first message). To check this is the first record of the initial
ClientHello we should also check that we've not received any data yet
(s->init_num == 0), and that we've not had any empty records.
GitHub Issue #1298
Reviewed-by: Emilia Käsper <emilia@openssl.org>
This is adapted from BoringSSL commit 2f87112b963.
This fixes a number of bugs where the existence of bbio was leaked in the
public API and broke things.
- SSL_get_wbio returned the bbio during the handshake. It must always return
the BIO the consumer configured. In doing so, some internal accesses of
SSL_get_wbio should be switched to ssl->wbio since those want to see bbio.
- The logic in SSL_set_rfd, etc. (which I doubt is quite right since
SSL_set_bio's lifetime is unclear) would get confused once wbio got
wrapped. Those want to compare to SSL_get_wbio.
- If SSL_set_bio was called mid-handshake, bbio would get disconnected and
lose state. It forgets to reattach the bbio afterwards. Unfortunately,
Conscrypt does this a lot. It just never ended up calling it at a point
where the bbio would cause problems.
- Make more explicit the invariant that any bbio's which exist are always
attached. Simplify a few things as part of that.
RT#4572
Reviewed-by: Richard Levitte <levitte@openssl.org>
Fix some indentation at the same time
Reviewed-by: Matt Caswell <matt@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1292)
- Always process ALPN (previously there was an early return in the
certificate status handling)
- Don't send a duplicate alert. Previously, both
ssl_check_clienthello_tlsext_late and its caller would send an
alert. Consolidate alert sending code in the caller.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Continuing from the previous commit. Refactor tls_process_key_exchange() to
split out into a separate function the ECDHE aspects.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Continuing from the previous commit. Refactor tls_process_key_exchange() to
split out into a separate function the DHE aspects.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Continuing from the previous commit. Refactor tls_process_key_exchange() to
split out into a separate function the SRP aspects.
Reviewed-by: Richard Levitte <levitte@openssl.org>
The tls_process_key_exchange() function is too long. This commit starts
the process of splitting it up by moving the PSK preamble code to a
separate function.
Reviewed-by: Richard Levitte <levitte@openssl.org>
The function tls_process_key_exchange() is too long. This commit moves
the PSK preamble processing out to a separate function.
Reviewed-by: Richard Levitte <levitte@openssl.org>
If the SSL_SESS_CACHE_NO_INTERNAL_STORE cache mode is used then we weren't
removing sessions from the external cache, e.g. if an alert occurs the
session is supposed to be automatically removed.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Don't call strncpy with strlen of the source as the length. Don't call
strlen multiple times. Eventually we will want to replace this with a proper
PACKET style handling (but for construction of PACKETs instead of just
reading them as it is now). For now though this is safe because
PSK_MAX_IDENTITY_LEN will always fit into the destination buffer.
This addresses an OCAP Audit issue.
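The general pattern being avoided and its replacement, as a generic sketch (not the actual PSK code):

    #include <string.h>

    /* strncpy(dst, src, strlen(src)) neither bounds the copy by the
     * destination nor guarantees NUL-termination. Bound the copy by the
     * destination size instead, and call strlen() only once. */
    static int copy_identity(char *dst, size_t dstlen, const char *src)
    {
        size_t len = strlen(src);

        if (len >= dstlen)
            return 0;              /* would not fit, including the NUL */
        memcpy(dst, src, len + 1); /* copies the terminating NUL too */
        return 1;
    }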
Reviewed-by: Richard Levitte <levitte@openssl.org>
Continuing previous commit to break up the
tls_construct_client_key_exchange() function. This splits out the SRP
code.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Continuing previous commit to break up the
tls_construct_client_key_exchange() function. This splits out the GOST
code.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Continuing previous commit to break up the
tls_construct_client_key_exchange() function. This splits out the ECDHE
code.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Continuing previous commit to break up the
tls_construct_client_key_exchange() function. This splits out the DHE
code.
Reviewed-by: Richard Levitte <levitte@openssl.org>
The tls_construct_client_key_exchange() function is too long. This splits
out the construction of the PSK pre-amble into a separate function as well
as the RSA construction.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Continuing from the previous commits, this splits out the GOST code into
a separate function from the process CKE code.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Continuing from the previous commits, this splits out the ECDHE code into
a separate function from the process CKE code.
Reviewed-by: Richard Levitte <levitte@openssl.org>
Continuing from the previous commit, this splits out the DHE code into
a separate function from the process CKE code.
Reviewed-by: Richard Levitte <levitte@openssl.org>
The tls_process_client_key_exchange() function is far too long. This
splits out the PSK preamble processing, and the RSA processing into
separate functions.
Reviewed-by: Richard Levitte <levitte@openssl.org>
In preparation for splitting this function up into smaller functions this
commit reduces the scope of some of the variables to only be in scope for
the algorithm specific parts. In some cases that makes the error handling
more verbose than it needs to be - but we'll clean that up in a later
commit.
Reviewed-by: Richard Levitte <levitte@openssl.org>
The logic testing whether a CKE message is allowed or not was a little
difficult to follow. This tries to clean it up.
Reviewed-by: Emilia Käsper <emilia@openssl.org>
The static function key_exchange_expected() used to return -1 on error.
Commit 361a119127 changed that so that it can never fail. This means that
some tidy up can be done to simplify error handling in callers of that
function.
Reviewed-by: Emilia Käsper <emilia@openssl.org>
Having received a ClientKeyExchange message instead of a Certificate we
know that we are not going to receive a CertificateVerify message. This
means we can free up the handshake_buffer. However we better call
ssl3_digest_cached_records() instead of just freeing it up, otherwise we
later try and use it anyway and a core dump results. This could happen,
for example, in SSLv3 where we send a CertificateRequest but the client
sends no Certificate message at all. This is valid in SSLv3 (in TLS
clients are required to send an empty Certificate message).
Found using the BoringSSL test suite.
Reviewed-by: Emilia Käsper <emilia@openssl.org>
In TLS if the server sends a CertificateRequest and the client does not
provide one, if the server cannot continue it should send a
HandshakeFailure alert. In SSLv3 the same should happen, but instead we
were sending an UnexpectedMessage alert. This is incorrect - the message
isn't unexpected - it is valid for the client not to send one - it's just
that we cannot continue without one.
Reviewed-by: Emilia Käsper <emilia@openssl.org>
Move the preparation of the client certificate to be post processing work
after reading the CertificateRequest message rather than pre processing
work prior to writing the Certificate message. As part of preparing the
client certificate we may discover that we do not have one available. If
we are also talking SSLv3 then we won't send the Certificate message at
all. However, if we don't discover this until we are about to send the
Certificate message it is too late and we send an empty one anyway. This
is wrong for SSLv3.
Reviewed-by: Emilia Käsper <emilia@openssl.org>
The set0 setters take ownership of their arguments, so the values should
be set to NULL to avoid a double-free in the cleanup block should
ssl_security(SSL_SECOP_TMP_DH) fail. Found by BoringSSL's WeakDH test.
Reviewed-by: Kurt Roeckx <kurt@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1299)
In light of potential UKS (unknown key share) attacks on some
applications, primarily browsers, despite RFC7671, name checks are
by default applied with DANE-EE(3) TLSA records. Applications for
which UKS is not a problem can optionally disable DANE-EE(3) name
checks via the new SSL_CTX_dane_set_flags() and friends.
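For applications that have decided UKS is not a concern, the relaxation looks roughly like this (a minimal sketch using the new API):

    #include <openssl/ssl.h>

    /* Enable DANE on the context and opt out of the default DANE-EE(3)
     * name checks. */
    static int relax_dane_ee(SSL_CTX *ctx)
    {
        if (SSL_CTX_dane_enable(ctx) <= 0)
            return 0;
        SSL_CTX_dane_set_flags(ctx, DANE_FLAG_NO_DANE_EE_NAMECHECKS);
        return 1;
    }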
Reviewed-by: Rich Salz <rsalz@openssl.org>
Commit aea145e removed some error codes that are generated
algorithmically: mapping alerts to error texts. Found by
Andreas Karlsson. This restores them, and adds two missing ones.
Reviewed-by: Matt Caswell <matt@openssl.org>
The SSL_load_client_CA_file() function failed to load any CAs due to an
incorrect assumption about the return value of lh_*_insert(). The
return value when inserting into a hash is the old value of the key.
The bug was introduced in 3c82e437bb.
Reviewed-by: Kurt Roeckx <kurt@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1279)
We calculate the size required for the ServerKeyExchange message and then
call BUF_MEM_grow_clean() on the buffer. However we fail to take account of
2 bytes required for the signature algorithm and 2 bytes for the signature
length, i.e. we could overflow by 4 bytes. In reality this won't happen
because the buffer is pre-allocated to a large size that means it should be
big enough anyway.
Addresses an OCAP Audit issue.
Reviewed-by: Rich Salz <rsalz@openssl.org>
Reviewed-by: Andy Polyakov <appro@openssl.org>
Reviewed-by: Kurt Roeckx <kurt@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1264)
Reviewed-by: Andy Polyakov <appro@openssl.org>
Reviewed-by: Kurt Roeckx <kurt@openssl.org>
Reviewed-by: Rich Salz <rsalz@openssl.org>
(Merged from https://github.com/openssl/openssl/pull/1264)
In some situations (such as when we receive a fragment of an alert)
we try to get the next packet but do not mark the current one as read,
meaning that we get the same record back again - leading to an infinite
loop.
Found using the BoringSSL test suite.
Reviewed-by: Andy Polyakov <appro@openssl.org>