openssl/crypto/rand/rand_unix.c

/*
* Copyright 1995-2019 The OpenSSL Project Authors. All Rights Reserved.
*
* Licensed under the OpenSSL license (the "License"). You may not use
* this file except in compliance with the License. You can obtain a copy
* in the file LICENSE in the source distribution or at
* https://www.openssl.org/source/license.html
*/
#ifndef _GNU_SOURCE
# define _GNU_SOURCE
#endif
#include "e_os.h"
#include <stdio.h>
#include "internal/cryptlib.h"
#include <openssl/rand.h>
#include <openssl/crypto.h>
#include "rand_lcl.h"
#include "internal/rand_int.h"
#include <stdio.h>
#include "internal/dso.h"
#ifdef __linux
# include <sys/syscall.h>
# ifdef DEVRANDOM_WAIT
# include <sys/shm.h>
# include <sys/utsname.h>
# endif
#endif
#if defined(__FreeBSD__) && !defined(OPENSSL_SYS_UEFI)
# include <sys/types.h>
# include <sys/sysctl.h>
# include <sys/param.h>
#endif
#if defined(__OpenBSD__) || defined(__NetBSD__)
# include <sys/param.h>
#endif
#if defined(OPENSSL_SYS_UNIX) || defined(__DJGPP__)
# include <sys/types.h>
# include <sys/stat.h>
# include <fcntl.h>
# include <unistd.h>
# include <sys/time.h>
static uint64_t get_time_stamp(void);
static uint64_t get_timer_bits(void);
/* Macro to convert two thirty two bit values into a sixty four bit one */
# define TWO32TO64(a, b) ((((uint64_t)(a)) << 32) + (b))
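/*
 * Illustrative use of TWO32TO64 (a sketch of the intended usage; the real
 * call sites are in get_time_stamp() and get_timer_bits() further down in
 * this file): combine the seconds and sub-second halves of a timestamp
 * into a single 64-bit value, e.g.
 *
 *     struct timeval tv;
 *
 *     if (gettimeofday(&tv, NULL) == 0)
 *         return TWO32TO64(tv.tv_sec, tv.tv_usec);
 */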
/*
* Check for the existence and support of POSIX timers. The standard
* says that the _POSIX_TIMERS macro will have a positive value if they
* are available.
*
* However, we want an additional constraint: that the timer support does
* not require an extra library dependency. Early versions of glibc
* require -lrt to be specified on the link line to access the timers,
* so this needs to be checked for.
*
* Matters are complicated further because some libraries define __GLIBC__
* but don't support the version testing macro (e.g. uClibc). This means
* an extra check is needed.
*
* The final condition is:
* "have posix timers and either not glibc or glibc without -lrt"
*
* The nested #if sequences are required to avoid using a parameterised
* macro that might be undefined.
*/
# undef OSSL_POSIX_TIMER_OKAY
# if defined(_POSIX_TIMERS) && _POSIX_TIMERS > 0
# if defined(__GLIBC__)
# if defined(__GLIBC_PREREQ)
# if __GLIBC_PREREQ(2, 17)
# define OSSL_POSIX_TIMER_OKAY
# endif
# endif
# else
# define OSSL_POSIX_TIMER_OKAY
# endif
# endif
#endif /* defined(OPENSSL_SYS_UNIX) || defined(__DJGPP__) */
#if defined(OPENSSL_RAND_SEED_NONE)
/* none means none; this simplifies the following logic */
# undef OPENSSL_RAND_SEED_OS
# undef OPENSSL_RAND_SEED_GETRANDOM
# undef OPENSSL_RAND_SEED_LIBRANDOM
# undef OPENSSL_RAND_SEED_DEVRANDOM
# undef OPENSSL_RAND_SEED_RDTSC
# undef OPENSSL_RAND_SEED_RDCPU
# undef OPENSSL_RAND_SEED_EGD
#endif
#if (defined(OPENSSL_SYS_VXWORKS) || defined(OPENSSL_SYS_UEFI)) && \
!defined(OPENSSL_RAND_SEED_NONE)
# error "UEFI and VXWorks only support seeding NONE"
#endif
#if defined(OPENSSL_SYS_VXWORKS)
/* empty implementation */
int rand_pool_init(void)
{
return 1;
}
void rand_pool_cleanup(void)
{
}
void rand_pool_keep_random_devices_open(int keep)
{
}
size_t rand_pool_acquire_entropy(RAND_POOL *pool)
{
return rand_pool_entropy_available(pool);
}
#endif
#if !(defined(OPENSSL_SYS_WINDOWS) || defined(OPENSSL_SYS_WIN32) \
|| defined(OPENSSL_SYS_VMS) || defined(OPENSSL_SYS_VXWORKS) \
|| defined(OPENSSL_SYS_UEFI))
# if defined(OPENSSL_SYS_VOS)
# ifndef OPENSSL_RAND_SEED_OS
# error "Unsupported seeding method configured; must be os"
# endif
# if defined(OPENSSL_SYS_VOS_HPPA) && defined(OPENSSL_SYS_VOS_IA32)
# error "Unsupported HP-PA and IA32 at the same time."
# endif
# if !defined(OPENSSL_SYS_VOS_HPPA) && !defined(OPENSSL_SYS_VOS_IA32)
# error "Must have one of HP-PA or IA32"
# endif
/*
* The following algorithm repeatedly samples the real-time clock (RTC) to
* generate a sequence of unpredictable data. The algorithm relies upon the
* uneven execution speed of the code (due to factors such as cache misses,
* interrupts, bus activity, and scheduling) and upon the rather large
* relative difference between the speed of the clock and the rate at which
* it can be read. If it is ported to an environment where execution speed
* is more constant or where the RTC ticks at a much slower rate, or the
* clock can be read with fewer instructions, it is likely that the results
* would be far more predictable. This should only be used for legacy
* platforms.
*
* As a precaution, we assume only 2 bits of entropy per byte.
*/
size_t rand_pool_acquire_entropy(RAND_POOL *pool)
{
short int code;
int i, k;
size_t bytes_needed;
struct timespec ts;
unsigned char v;
# ifdef OPENSSL_SYS_VOS_HPPA
long duration;
extern void s$sleep(long *_duration, short int *_code);
# else
long long duration;
extern void s$sleep2(long long *_duration, short int *_code);
# endif
bytes_needed = rand_pool_bytes_needed(pool, 4 /*entropy_factor*/);
for (i = 0; i < bytes_needed; i++) {
/*
* burn some cpu; hope for interrupts, cache collisions, bus
* interference, etc.
*/
for (k = 0; k < 99; k++)
ts.tv_nsec = random();
# ifdef OPENSSL_SYS_VOS_HPPA
/* sleep for 1/1024 of a second (976 us). */
duration = 1;
s$sleep(&duration, &code);
# else
/* sleep for 1/65536 of a second (15 us). */
duration = 1;
s$sleep2(&duration, &code);
# endif
/* Get wall clock time, take 8 bits. */
clock_gettime(CLOCK_REALTIME, &ts);
v = (unsigned char)(ts.tv_nsec & 0xFF);
rand_pool_add(pool, &v, sizeof(v), 2);
}
return rand_pool_entropy_available(pool);
}
void rand_pool_cleanup(void)
{
}
void rand_pool_keep_random_devices_open(int keep)
{
}
# else
# if defined(OPENSSL_RAND_SEED_EGD) && \
(defined(OPENSSL_NO_EGD) || !defined(DEVRANDOM_EGD))
# error "Seeding uses EGD but EGD is turned off or no device given"
# endif
# if defined(OPENSSL_RAND_SEED_DEVRANDOM) && !defined(DEVRANDOM)
# error "Seeding uses urandom but DEVRANDOM is not configured"
# endif
# if defined(OPENSSL_RAND_SEED_OS)
# if !defined(DEVRANDOM)
# error "OS seeding requires DEVRANDOM to be configured"
# endif
# define OPENSSL_RAND_SEED_GETRANDOM
# define OPENSSL_RAND_SEED_DEVRANDOM
# endif
# if defined(OPENSSL_RAND_SEED_LIBRANDOM)
# error "librandom not (yet) supported"
# endif
# if (defined(__FreeBSD__) || defined(__NetBSD__)) && defined(KERN_ARND)
/*
* sysctl_random(): Use sysctl() to read a random number from the kernel
* Returns the number of bytes returned in buf on success, -1 on failure.
*/
static ssize_t sysctl_random(char *buf, size_t buflen)
{
int mib[2];
size_t done = 0;
size_t len;
/*
* Note: sign conversion between size_t and ssize_t is safe even
* without a range check, see comment in syscall_random()
*/
/*
* On FreeBSD, older implementations returned longs; newer versions support
* variable sizes up to 256 bytes. The code below would not work properly
* when the sysctl returns long and we want to request something not a
* multiple of longs, which should never be the case.
*/
if (!ossl_assert(buflen % sizeof(long) == 0)) {
errno = EINVAL;
return -1;
}
/*
* On NetBSD before 4.0 KERN_ARND was an alias for KERN_URND, and only
* filled in an int, leaving the rest uninitialized. Since NetBSD 4.0
* it returns a variable number of bytes with the current version supporting
* up to 256 bytes.
* Just return an error on older NetBSD versions.
*/
#if defined(__NetBSD__) && __NetBSD_Version__ < 400000000
errno = ENOSYS;
return -1;
#endif
mib[0] = CTL_KERN;
mib[1] = KERN_ARND;
do {
len = buflen;
if (sysctl(mib, 2, buf, &len, NULL, 0) == -1)
return done > 0 ? done : -1;
done += len;
buf += len;
buflen -= len;
} while (buflen > 0);
return done;
}
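/*
 * Illustrative call of sysctl_random() (a sketch, not part of the build);
 * the caller must pass a length that is a multiple of sizeof(long) to
 * satisfy the assertion above:
 *
 *     char buf[64];
 *     ssize_t n = sysctl_random(buf, sizeof(buf));
 *
 *     if (n < 0)
 *         perror("sysctl KERN_ARND");
 */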
# endif
# if defined(OPENSSL_RAND_SEED_GETRANDOM)
# if defined(__linux) && !defined(__NR_getrandom)
# if defined(__arm__) && defined(__NR_SYSCALL_BASE)
# define __NR_getrandom (__NR_SYSCALL_BASE+384)
# elif defined(__i386__)
# define __NR_getrandom 355
# elif defined(__x86_64__) && !defined(__ILP32__)
# define __NR_getrandom 318
# endif
# endif
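/*
 * The syscall numbers above are only a fallback for toolchains whose kernel
 * headers predate getrandom(2); when <sys/syscall.h> already provides
 * __NR_getrandom, that definition is used unchanged.
 */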
/*
* syscall_random(): Try to get random data using a system call
* returns the number of bytes returned in buf, or < 0 on error.
*/
static ssize_t syscall_random(void *buf, size_t buflen)
{
/*
* Note: 'buflen' equals the size of the buffer which is used by the
* get_entropy() callback of the RAND_DRBG. It is roughly bounded by
*
* 2 * RAND_POOL_FACTOR * (RAND_DRBG_STRENGTH / 8) = 2^14
*
* which is way below the OSSL_SSIZE_MAX limit. Therefore sign conversion
* between size_t and ssize_t is safe even without a range check.
*/
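/*
 * As a worked example of the bound above, assuming the usual defaults of
 * RAND_POOL_FACTOR = 256 and RAND_DRBG_STRENGTH = 256 bits:
 *
 *     2 * 256 * (256 / 8) = 16384 = 2^14
 */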
/*
* Do runtime detection to find getentropy().
*
* Known OSs that should support this:
* - Darwin since 16 (OSX 10.12, IOS 10.0).
* - Solaris since 11.3
* - OpenBSD since 5.6
* - Linux since 3.17 with glibc 2.25
* - FreeBSD since 12.0 (1200061)
*/
# if defined(__GNUC__) && __GNUC__>=2 && defined(__ELF__) && !defined(__hpux)
extern int getentropy(void *buffer, size_t length) __attribute__((weak));
if (getentropy != NULL)
return getentropy(buf, buflen) == 0 ? (ssize_t)buflen : -1;
# else
union {
void *p;
int (*f)(void *buffer, size_t length);
} p_getentropy;
/*
* We could cache the result of the lookup, but we normally don't
* call this function often.
*/
ERR_set_mark();
p_getentropy.p = DSO_global_lookup("getentropy");
ERR_pop_to_mark();
if (p_getentropy.p != NULL)
return p_getentropy.f(buf, buflen) == 0 ? (ssize_t)buflen : -1;
# endif
/* Linux supports this since version 3.17 */
# if defined(__linux) && defined(__NR_getrandom)
return syscall(__NR_getrandom, buf, buflen, 0);
# elif (defined(__FreeBSD__) || defined(__NetBSD__)) && defined(KERN_ARND)
return sysctl_random(buf, buflen);
# else
errno = ENOSYS;
return -1;
# endif
}
# endif /* defined(OPENSSL_RAND_SEED_GETRANDOM) */
# if defined(OPENSSL_RAND_SEED_DEVRANDOM)
static const char *random_device_paths[] = { DEVRANDOM };
static struct random_device {
int fd;
dev_t dev;
ino_t ino;
mode_t mode;
dev_t rdev;
} random_devices[OSSL_NELEM(random_device_paths)];
static int keep_random_devices_open = 1;
# if defined(__linux) && defined(DEVRANDOM_WAIT)
static void *shm_addr;
static void cleanup_shm(void)
{
shmdt(shm_addr);
}
/*
* Ensure that the system randomness source has been adequately seeded.
* This is done by having the first start of libcrypto wait until the device
* /dev/random becomes able to supply a byte of entropy. Subsequent starts
* of the library and later reseedings do not need to do this.
*/
static int wait_random_seeded(void)
{
static int seeded = OPENSSL_RAND_SEED_DEVRANDOM_SHM_ID < 0;
static const int kernel_version[] = { DEVRANDOM_SAFE_KERNEL };
int kernel[2];
int shm_id, fd, r;
char c, *p;
struct utsname un;
fd_set fds;
if (!seeded) {
/* See if anything has created the global seeded indication */
if ((shm_id = shmget(OPENSSL_RAND_SEED_DEVRANDOM_SHM_ID, 1, 0)) == -1) {
/*
* Check the kernel's version and fail if it is too recent.
*
* Linux kernels from 4.8 onwards do not guarantee that
* /dev/urandom is properly seeded when /dev/random becomes
* readable. However, such kernels support the getentropy(2)
* system call, which should always succeed and thus renders this
* alternative (but essentially identical) source moot.
*/
if (uname(&un) == 0) {
kernel[0] = atoi(un.release);
p = strchr(un.release, '.');
kernel[1] = p == NULL ? 0 : atoi(p + 1);
if (kernel[0] > kernel_version[0]
|| (kernel[0] == kernel_version[0]
&& kernel[1] >= kernel_version[1])) {
return 0;
}
}
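/*
 * Illustration of the check above (an example, not normative): a
 * un.release string of "4.19.0-6-amd64" parses to kernel = { 4, 19 },
 * which is not older than the DEVRANDOM_SAFE_KERNEL threshold (4.8 in
 * the default configuration), so 0 is returned and the wait on
 * DEVRANDOM_WAIT is skipped; such kernels are expected to provide
 * getrandom(2)/getentropy(3) instead.
 */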
/* Open /dev/random and wait for it to be readable */
if ((fd = open(DEVRANDOM_WAIT, O_RDONLY)) != -1) {
if (DEVRANDM_WAIT_USE_SELECT && fd < FD_SETSIZE) {
FD_ZERO(&fds);
FD_SET(fd, &fds);
while ((r = select(fd + 1, &fds, NULL, NULL, NULL)) < 0
&& errno == EINTR);
} else {
while ((r = read(fd, &c, 1)) < 0 && errno == EINTR);
}
close(fd);
if (r == 1) {
seeded = 1;
/* Create the shared memory indicator */
shm_id = shmget(OPENSSL_RAND_SEED_DEVRANDOM_SHM_ID, 1,
IPC_CREAT | S_IRUSR | S_IRGRP | S_IROTH);
}
}
}
if (shm_id != -1) {
seeded = 1;
/*
* Map the shared memory to prevent its premature destruction.
* If this call fails, it isn't a big problem.
*/
shm_addr = shmat(shm_id, NULL, SHM_RDONLY);
if (shm_addr != (void *)-1)
OPENSSL_atexit(&cleanup_shm);
}
}
return seeded;
}
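/*
 * Diagnostic aside (not part of the algorithm): the indicator created
 * above is a plain System V shared memory segment keyed by
 * OPENSSL_RAND_SEED_DEVRANDOM_SHM_ID (114 in the default configuration),
 * so its presence on a running system can be inspected externally, for
 * example with ipcs(1).
 */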
# else /* defined __linux */
static int wait_random_seeded(void)
{
return 1;
}
# endif
/*
* Verify that the file descriptor associated with the random source is
* still valid. The rationale for doing this is the fact that it is not
* uncommon for daemons to close all open file handles when daemonizing.
* So the handle might have been closed or even reused for opening
* another file.
*/
static int check_random_device(struct random_device * rd)
{
struct stat st;
return rd->fd != -1
&& fstat(rd->fd, &st) != -1
&& rd->dev == st.st_dev
&& rd->ino == st.st_ino
&& ((rd->mode ^ st.st_mode) & ~(S_IRWXU | S_IRWXG | S_IRWXO)) == 0
&& rd->rdev == st.st_rdev;
}
/*
* Open a random device if required and return its file descriptor or -1 on error
*/
static int get_random_device(size_t n)
{
struct stat st;
struct random_device * rd = &random_devices[n];
/* reuse existing file descriptor if it is (still) valid */
if (check_random_device(rd))
return rd->fd;
/* open the random device ... */
if ((rd->fd = open(random_device_paths[n], O_RDONLY)) == -1)
return rd->fd;
/* ... and cache its relevant stat(2) data */
if (fstat(rd->fd, &st) != -1) {
rd->dev = st.st_dev;
rd->ino = st.st_ino;
rd->mode = st.st_mode;
rd->rdev = st.st_rdev;
} else {
close(rd->fd);
rd->fd = -1;
}
return rd->fd;
}
/*
* Close a random device, first making sure the descriptor still refers to the random device opened earlier
*/
static void close_random_device(size_t n)
{
struct random_device * rd = &random_devices[n];
if (check_random_device(rd))
close(rd->fd);
rd->fd = -1;
}
int rand_pool_init(void)
{
size_t i;
for (i = 0; i < OSSL_NELEM(random_devices); i++)
random_devices[i].fd = -1;
return 1;
}
void rand_pool_cleanup(void)
{
size_t i;
for (i = 0; i < OSSL_NELEM(random_devices); i++)
close_random_device(i);
}
void rand_pool_keep_random_devices_open(int keep)
{
if (!keep)
rand_pool_cleanup();
keep_random_devices_open = keep;
}
# else /* !defined(OPENSSL_RAND_SEED_DEVRANDOM) */
int rand_pool_init(void)
{
return 1;
}
void rand_pool_cleanup(void)
{
}
void rand_pool_keep_random_devices_open(int keep)
{
}
# endif /* defined(OPENSSL_RAND_SEED_DEVRANDOM) */
/*
* Try the various seeding methods in turn, exit when successful.
*
* TODO(DRBG): If more than one entropy source is available, is it
* preferable to stop as soon as enough entropy has been collected
* (as favored by @rsalz) or should one rather be defensive and add
* more entropy than requested and/or from different sources?
*
* Currently, the user can select multiple entropy sources in the
* configure step, yet in practice only the first available source
* will be used. A more flexible solution has been requested, but
* currently it is not clear how this can be achieved without
* overengineering the problem. There are many parameters which
* could be taken into account when selecting the order and amount
* of input from the different entropy sources (trust, quality,
* possibility of blocking).
*/
size_t rand_pool_acquire_entropy(RAND_POOL *pool)
{
# if defined(OPENSSL_RAND_SEED_NONE)
return rand_pool_entropy_available(pool);
# else
size_t entropy_available;
# if defined(OPENSSL_RAND_SEED_GETRANDOM)
{
size_t bytes_needed;
unsigned char *buffer;
ssize_t bytes;
/* Maximum allowed number of consecutive unsuccessful attempts */
int attempts = 3;
bytes_needed = rand_pool_bytes_needed(pool, 1 /*entropy_factor*/);
while (bytes_needed != 0 && attempts-- > 0) {
buffer = rand_pool_add_begin(pool, bytes_needed);
bytes = syscall_random(buffer, bytes_needed);
if (bytes > 0) {
rand_pool_add_end(pool, bytes, 8 * bytes);
bytes_needed -= bytes;
attempts = 3; /* reset counter after successful attempt */
} else if (bytes < 0 && errno != EINTR) {
break;
}
}
}
entropy_available = rand_pool_entropy_available(pool);
if (entropy_available > 0)
return entropy_available;
# endif
# if defined(OPENSSL_RAND_SEED_LIBRANDOM)
{
/* Not yet implemented. */
}
# endif
# if defined(OPENSSL_RAND_SEED_DEVRANDOM)
if (wait_random_seeded()) {
size_t bytes_needed;
unsigned char *buffer;
size_t i;
bytes_needed = rand_pool_bytes_needed(pool, 1 /*entropy_factor*/);
for (i = 0; bytes_needed > 0 && i < OSSL_NELEM(random_device_paths);
i++) {
ssize_t bytes = 0;
/* Maximum number of consecutive unsuccessful attempts */
int attempts = 3;
const int fd = get_random_device(i);
if (fd == -1)
continue;
while (bytes_needed != 0 && attempts-- > 0) {
buffer = rand_pool_add_begin(pool, bytes_needed);
bytes = read(fd, buffer, bytes_needed);
if (bytes > 0) {
rand_pool_add_end(pool, bytes, 8 * bytes);
bytes_needed -= bytes;
attempts = 3; /* reset counter on successful attempt */
} else if (bytes < 0 && errno != EINTR) {
break;
}
}
if (bytes < 0 || !keep_random_devices_open)
close_random_device(i);
bytes_needed = rand_pool_bytes_needed(pool, 1);
}
entropy_available = rand_pool_entropy_available(pool);
if (entropy_available > 0)
return entropy_available;
}
# endif
# if defined(OPENSSL_RAND_SEED_RDTSC)
entropy_available = rand_acquire_entropy_from_tsc(pool);
if (entropy_available > 0)
return entropy_available;
# endif
# if defined(OPENSSL_RAND_SEED_RDCPU)
entropy_available = rand_acquire_entropy_from_cpu(pool);
if (entropy_available > 0)
return entropy_available;
# endif
# if defined(OPENSSL_RAND_SEED_EGD)
{
static const char *paths[] = { DEVRANDOM_EGD, NULL };
size_t bytes_needed;
unsigned char *buffer;
int i;
bytes_needed = rand_pool_bytes_needed(pool, 1 /*entropy_factor*/);
for (i = 0; bytes_needed > 0 && paths[i] != NULL; i++) {
size_t bytes = 0;
int num;
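/* Query the EGD socket; only a complete answer is credited to the pool. */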
buffer = rand_pool_add_begin(pool, bytes_needed);
num = RAND_query_egd_bytes(paths[i],
buffer, (int)bytes_needed);
if (num == (int)bytes_needed)
bytes = bytes_needed;
rand_pool_add_end(pool, bytes, 8 * bytes);
bytes_needed = rand_pool_bytes_needed(pool, 1);
}
entropy_available = rand_pool_entropy_available(pool);
if (entropy_available > 0)
return entropy_available;
}
# endif
return rand_pool_entropy_available(pool);
# endif
}
# endif
#endif
#if defined(OPENSSL_SYS_UNIX) || defined(__DJGPP__)
int rand_pool_add_nonce_data(RAND_POOL *pool)
{
struct {
pid_t pid;
CRYPTO_THREAD_ID tid;
uint64_t time;
} data = { 0 };
/*
* Add process id, thread id, and a high resolution timestamp to
* ensure that the nonce is unique with high probability for
* different process instances.
*/
data.pid = getpid();
data.tid = CRYPTO_THREAD_get_current_id();
data.time = get_time_stamp();
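/*
 * The entropy estimate is zero: the data only has to make the nonce
 * unique, it does not have to be unpredictable.
 */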
return rand_pool_add(pool, (unsigned char *)&data, sizeof(data), 0);
}
int rand_pool_add_additional_data(RAND_POOL *pool)
{
struct {
int fork_id;
CRYPTO_THREAD_ID tid;
uint64_t time;
} data = { 0 };
/*
* Add some noise from the thread id and a high resolution timer.
* The fork_id adds some extra fork-safety.
* The thread id adds a little randomness if the drbg is accessed
* concurrently (which is the case for the <master> drbg).
*/
data.fork_id = openssl_get_fork_id();
data.tid = CRYPTO_THREAD_get_current_id();
data.time = get_timer_bits();
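/* As with the nonce data, no entropy is credited for this data. */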
return rand_pool_add(pool, (unsigned char *)&data, sizeof(data), 0);
}
/*
* Get the current time with the highest possible resolution
*
* The time stamp is added to the nonce, so it is optimized for not repeating.
* The current time is ideal for this purpose, provided the computer's clock
* is synchronized.
*/
static uint64_t get_time_stamp(void)
{
# if defined(OSSL_POSIX_TIMER_OKAY)
{
struct timespec ts;
if (clock_gettime(CLOCK_REALTIME, &ts) == 0)
return TWO32TO64(ts.tv_sec, ts.tv_nsec);
}
# endif
# if defined(__unix__) \
|| (defined(_POSIX_C_SOURCE) && _POSIX_C_SOURCE >= 200112L)
{
struct timeval tv;
if (gettimeofday(&tv, NULL) == 0)
return TWO32TO64(tv.tv_sec, tv.tv_usec);
}
# endif
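/* Last resort: the current time with one second resolution. */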
return time(NULL);
}
/*
* Get an arbitrary timer value of the highest possible resolution
*
* The timer value is added as random noise to the additional data,
* which is not considered a trusted entropy source, so any result
* is acceptable.
*/
static uint64_t get_timer_bits(void)
{
uint64_t res = OPENSSL_rdtsc();
if (res != 0)
return res;
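/* The time stamp counter is not available, fall back to platform specific timers. */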
# if defined(__sun) || defined(__hpux)
return gethrtime();
# elif defined(_AIX)
{
timebasestruct_t t;
read_wall_time(&t, TIMEBASE_SZ);
return TWO32TO64(t.tb_high, t.tb_low);
}
# elif defined(OSSL_POSIX_TIMER_OKAY)
{
struct timespec ts;
# ifdef CLOCK_BOOTTIME
# define CLOCK_TYPE CLOCK_BOOTTIME
# elif defined(_POSIX_MONOTONIC_CLOCK)
# define CLOCK_TYPE CLOCK_MONOTONIC
# else
# define CLOCK_TYPE CLOCK_REALTIME
# endif
if (clock_gettime(CLOCK_TYPE, &ts) == 0)
return TWO32TO64(ts.tv_sec, ts.tv_nsec);
}
# endif
# if defined(__unix__) \
|| (defined(_POSIX_C_SOURCE) && _POSIX_C_SOURCE >= 200112L)
{
struct timeval tv;
if (gettimeofday(&tv, NULL) == 0)
return TWO32TO64(tv.tv_sec, tv.tv_usec);
}
# endif
return time(NULL);
}
#endif /* defined(OPENSSL_SYS_UNIX) || defined(__DJGPP__) */