History log of /src/sys/arch/x86/x86/cpu_rng.c
Revision  Date  Author  Comments
 1.23  01-Aug-2024  riastradh x86/cpu_rng.c: Archive more links.

Why do major hardware manufacturers consistently seem to think links
should just stop working after a year or two?

No functional change intended, only comments.
 1.22  31-Jul-2024  riastradh x86/cpu_rng.c: Add reference for Intel's hardware design.

Not normative, unverifiable, possibly outdated -- but still a useful
description of a model of what Intel might have implemented under the
hood of RDRAND/RDSEED.

No functional change.
 1.21  09-Jun-2024  riastradh x86/cpu_rng: Fix false alarm rate of CPU RNG health test.

Lower it from 1/2^32 (about one in four billion) to 1/2^256
(approximately not gonna happen squared).

PR port-amd64/58122
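
Lowering the false-alarm rate generally means demanding more evidence of
repetition before declaring the hardware RNG broken. A minimal sketch of a
repetition-style health test, with illustrative names and constants that are
not the kernel's:

	#include <stdbool.h>
	#include <stdint.h>

	/*
	 * Illustrative repetition test: alarm only if NREP consecutive
	 * 64-bit samples are identical.  For genuinely random input the
	 * chance of a false alarm is 2^(-64*(NREP-1)) per run of
	 * samples; NREP = 5 gives 2^-256.
	 */
	#define NREP	5

	struct rep_test {
		uint64_t	last;
		unsigned	count;
	};

	static bool
	rep_test_failed(struct rep_test *t, uint64_t sample)
	{
		if (sample == t->last) {
			if (++t->count >= NREP)
				return true;	/* suspiciously repetitive */
		} else {
			t->last = sample;
			t->count = 1;
		}
		return false;
	}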
 1.20  07-Oct-2021  msaitoh branches: 1.20.4;
KNF. No functional change.
 1.19  30-Jul-2020  riastradh Cite Cryptography Research evaluation of VIA RNG and give live URL.

(URL verified to be archived in the Internet Archive for posterity)
 1.18  25-Jul-2020  riastradh Tweak VIA CPU RNG.

- Cite source for documentation.
- Omit needless kpreempt_disable/enable.
- Explain what's going on.
- Use "D"(out) rather than "+D"(out) -- no REP so no register update.
- Fix interpretation of number of bytes returned.

The last one is likely to address

[ 4.0518619] aes: VIA ACE
....
[ 11.7018582] cpu_rng via: failed repetition test
[ 12.4718583] entropy: ready

reported by Andrius V.
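
For orientation, a rough sketch (not the NetBSD code) of a single non-REP
XSTORE, illustrating the two points above: with no REP prefix the instruction
never advances %rdi, so the buffer is an input-only "D" operand, and the
number of bytes stored must be read back out of the status word in %eax. The
exact status-word layout used below is an assumption for illustration:

	#include <stddef.h>
	#include <stdint.h>

	/*
	 * Illustrative wrapper around VIA's XSTORE (opcode 0f a7 c0),
	 * not the kernel's code.  %edx selects the default quality
	 * setting; %eax returns a status word whose low bits report
	 * how many bytes were stored (the mask is assumed here).
	 */
	static size_t
	via_xstore_once(uint64_t *buf)
	{
		uint32_t sts;

		__asm__ __volatile__(".byte 0x0f, 0xa7, 0xc0"	/* xstore */
		    : "=a"(sts)
		    : "D"(buf), "d"(0)
		    : "memory");
		return sts & 0x1f;	/* assumed: byte count in low bits */
	}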
 1.17  15-Jun-2020  riastradh Count down bits of entropy, not bits of data, in x86 cpu_rng.

Fixes logic in this loop for XSTORERNG on VIA CPUs, which are deemed
to have half the entropy per bit of data as RDSEED on Intel CPUs, so
that it gathers enough entropy on the first request, not on the
second request.
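
A minimal sketch of that accounting (names and numbers are illustrative, not
the kernel's): the loop terminates on accumulated entropy rather than on bytes
of data, so a source credited half an entropy bit per data bit simply iterates
longer.

	#include <stddef.h>
	#include <stdint.h>

	/*
	 * Illustrative only.  rng() stands in for a hardware RNG read
	 * returning how many bytes of data it produced;
	 * ent_per_databit_x2 is the entropy credited per data bit in
	 * half-bit units (2 for RDSEED-class sources, 1 for the VIA
	 * RNG as credited above).
	 */
	static void
	gather_entropy(size_t (*rng)(uint64_t *), unsigned ent_per_databit_x2)
	{
		unsigned needed = 256;		/* bits of entropy wanted */
		uint64_t sample;

		while (needed > 0) {
			size_t databits = 8 * rng(&sample);
			unsigned entbits = (databits * ent_per_databit_x2) / 2;

			/* ... mix `sample' into the pool ... */
			needed -= (entbits < needed) ? entbits : needed;
		}
	}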
 1.16  15-Jun-2020  riastradh Use x86_read_psl/x86_disable_intr/x86_write_psl to defer interrupts.

Using x86_disable_intr/x86_enable_intr causes a bit of a snag when we
try it early at boot before we're ready to handle interrupts, because
it has the effect of enabling interrupts!

Fixes instant reset at boot on VIA CPUs. The instant reset on boot
is new since the entropy rework, which initialized the x86 CPU RNG
earlier than before, but in principle this could also cause other
problems while not early at boot too.

XXX pullup
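
The interrupt-deferral pattern being described is, roughly (a sketch in
kernel context, not a verbatim excerpt):

	#include <sys/types.h>
	#include <machine/cpufunc.h>	/* x86_read_psl, x86_disable_intr, x86_write_psl */

	static void
	with_interrupts_deferred(void (*fn)(void *), void *arg)
	{
		u_long psl;

		psl = x86_read_psl();	/* remember whether interrupts were on */
		x86_disable_intr();
		(*fn)(arg);		/* e.g. poke the CPU RNG */
		x86_write_psl(psl);	/* restore; does NOT enable them if they were off */
	}

Restoring the saved PSL, rather than calling x86_enable_intr() unconditionally,
is what keeps interrupts off when the caller runs before the system is ready
to handle them.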
 1.15  05-Jun-2020  kamil Change const unsigned to preprocessor define

Fixes GCC -O0 build with the stack protector.
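
A hypothetical before/after of the change (the identifier and size are
illustrative); the point is that a const-qualified variable is not a C
constant expression, so an array sized by it is formally a VLA, which is what
trips the -O0 build with the stack protector:

	#include <stdint.h>

	/* Before: formally a VLA, since RNG_NSAMPLES_OLD is not a
	 * constant expression in C. */
	static const unsigned	RNG_NSAMPLES_OLD = 32;

	void
	before(void)
	{
		uint64_t buf[RNG_NSAMPLES_OLD];	/* VLA at -O0 */
		(void)buf;
	}

	/* After: a preprocessor define is a genuine constant expression,
	 * so the array has a fixed size at every optimization level. */
	#define RNG_NSAMPLES	32

	void
	after(void)
	{
		uint64_t buf[RNG_NSAMPLES];
		(void)buf;
	}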
 1.14  10-May-2020  maxv Reintroduce cpu_rng_early_sample(), but this time with embedded detection
for RDRAND/RDSEED, because TSC is not very strong.
 1.13  30-Apr-2020  riastradh rnd_attach_source calls the callback itself now.

No need for every driver to explicitly call it to prime the pool.

Eliminate now-unused <sys/rndpool.h>.
 1.12  30-Apr-2020  riastradh Omit needless #include <sys/rnd.h>.
 1.11  30-Apr-2020  riastradh Simplify Intel RDRAND/RDSEED and VIA C3 RNG API.

Push it all into MD x86 code to keep it simpler, until we have other
examples on other CPUs. Simplify RDSEED-to-RDRAND fallback.
Eliminate cpu_earlyrng in favour of just using entropy_extract, which
is available early now.
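
A condensed sketch of what an RDSEED-to-RDRAND fallback can look like
(illustrative, not the NetBSD implementation); both instructions report
success in the carry flag, so a failed RDSEED read can simply retry via
RDRAND:

	#include <stdbool.h>
	#include <stdint.h>

	/* Illustrative only; not the kernel's code. */

	static bool
	rdseed64(uint64_t *p)
	{
		uint8_t ok;

		__asm__ __volatile__("rdseed %0; setc %1"
		    : "=r"(*p), "=qm"(ok) :: "cc");
		return ok;
	}

	static bool
	rdrand64(uint64_t *p)
	{
		uint8_t ok;

		__asm__ __volatile__("rdrand %0; setc %1"
		    : "=r"(*p), "=qm"(ok) :: "cc");
		return ok;
	}

	static bool
	rng_read64(uint64_t *p)
	{
		/* Prefer RDSEED; fall back to RDRAND if it is exhausted. */
		return rdseed64(p) || rdrand64(p);
	}

As revision 1.10 below notes, the fallback path must itself be guarded by a
CPUID check for RDRAND, since a hypervisor may expose one instruction without
the other.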
 1.10  01-Nov-2019  taca Check CPU support of RDRAND before calling cpu_rng_rdrand().

cpu_earlyrng() checks CPU support of RDSEED and RDRAND before calling
cpu_rng_rdseed() and cpu_rng_rdrand().

But cpu_rng_rdseed() did not check CPU support of RDRAND, and the system
crashed in such an environment. This does not happen with a real CPU, but it
can in some VM environments.

Fix kern/54655 and confirmed by msaitoh@.

Needs pullup to netbsd-9.
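
A sketch of the kind of per-instruction feature probe this implies, written
with the GCC/Clang <cpuid.h> helpers rather than the kernel's own feature
tables (illustrative only): RDRAND is CPUID.01H:ECX[30] and RDSEED is
CPUID.07H(0):EBX[18].

	#include <cpuid.h>	/* __get_cpuid, __get_cpuid_count (GCC/Clang) */
	#include <stdbool.h>

	static bool
	have_rdrand(void)
	{
		unsigned a, b, c, d;

		if (!__get_cpuid(1, &a, &b, &c, &d))
			return false;
		return (c & (1u << 30)) != 0;	/* CPUID.01H:ECX.RDRAND */
	}

	static bool
	have_rdseed(void)
	{
		unsigned a, b, c, d;

		if (!__get_cpuid_count(7, 0, &a, &b, &c, &d))
			return false;
		return (b & (1u << 18)) != 0;	/* CPUID.07H(0):EBX.RDSEED */
	}

The crash described above is what happens when only the RDSEED bit is checked
and the RDRAND fallback is then taken on a guest that advertises RDSEED but
not RDRAND.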
 1.9  22-Aug-2018  maxv branches: 1.9.4;
Add support for monitoring the stack with kASan. This allows us to detect
illegal memory accesses occurring there.

The compiler inlines a piece of code in each function that adds redzones
around the local variables and poisons them. The illegal accesses are then
detected using the usual kASan machinery.

The stack size is doubled, from 4 pages to 8 pages.

Several boot functions are marked with the __noasan flag, to prevent the
compiler from adding redzones in them (because we haven't yet initialized
kASan). The kasan_early_init function is called early at boot time to
quickly create the shadow for the current stack; after this is done, we
don't need __noasan anymore in the boot path.

We pass -fasan-shadow-offset=0xDFFF900000000000, because the compiler
wants to do

	shad = shadow-offset + (addr >> 3)

and we do, in kasan_addr_to_shad,

	shad = KASAN_SHADOW_START + ((addr - CANONICAL_BASE) >> 3)

hence

	shad = KASAN_SHADOW_START + (addr >> 3) - (CANONICAL_BASE >> 3)
	     = [KASAN_SHADOW_START - (CANONICAL_BASE >> 3)] + (addr >> 3)

which implies

	shadow-offset = KASAN_SHADOW_START - (CANONICAL_BASE >> 3)
	              = 0xFFFF800000000000 - (0xFFFF800000000000 >> 3)
	              = 0xDFFF900000000000

In UVM, we add a kasan_free (that is not preceded by a kasan_alloc). We
don't add poisoned redzones ourselves, but all the functions we execute
do, so we need to manually clear the poison before freeing the stack.

With the help of Kamil for the makefile stuff.
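
The shadow-offset arithmetic above can be checked mechanically; a small
standalone sketch using the constants quoted in the commit message (values as
of that commit):

	#include <assert.h>
	#include <stdint.h>

	#define CANONICAL_BASE		0xFFFF800000000000ULL
	#define KASAN_SHADOW_START	0xFFFF800000000000ULL
	#define ASAN_SHADOW_OFFSET	0xDFFF900000000000ULL

	/* What the kernel computes in kasan_addr_to_shad(). */
	static uint64_t
	kasan_addr_to_shad(uint64_t addr)
	{
		return KASAN_SHADOW_START + ((addr - CANONICAL_BASE) >> 3);
	}

	/* What the compiler emits with -fasan-shadow-offset=... */
	static uint64_t
	compiler_shadow(uint64_t addr)
	{
		return ASAN_SHADOW_OFFSET + (addr >> 3);
	}

	int
	main(void)
	{
		uint64_t addr = 0xFFFF800012345678ULL;	/* arbitrary kernel VA */

		assert(kasan_addr_to_shad(addr) == compiler_shadow(addr));
		return 0;
	}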
 1.8  21-Jul-2018  maxv Forgot to commit a change in i386/cpufunc.S; add rdtsc(), so that it can be
used in cpu_rng. Restore the cpu_rng code back to how it was in my initial
commit.
 1.7  21-Jul-2018  kre Unbreak build. Fake out (ie: remove) rdtsc() which does not
exist on XEN (or not yet anyway).

This change needs to be reverted when a proper solution is implemented.
 1.6  21-Jul-2018  maxv More ASLR. Randomize the location of the direct map at boot time on amd64.
This doesn't need "options KASLR" and works on GENERIC. Will soon be
enabled by default.

The location of the areas is abstracted in a slotspace structure. Ideally
we should always use this structure when touching the L4 slots, instead of
the current cocktail of global variables and constants.

machdep initializes the structure with the default values, and we then
randomize its dmap entry. Ideally machdep should randomize everything at
once, but in the case of the direct map its size is determined a little
later in the boot procedure, so we're forced to randomize its location
later too.
 1.5  29-Feb-2016  riastradh branches: 1.5.2; 1.5.12; 1.5.18; 1.5.20; 1.5.22;
Let the compiler decide whether to inline.

Works around ICE in PCC for now:

/home/riastradh/netbsd/current/src/sys/arch/x86/x86/cpu_rng.c, line 195: bad xasm node type 23
/home/riastradh/netbsd/current/src/sys/arch/x86/x86/cpu_rng.c, line 195: bad xasm node type 23
internal compiler error: /home/riastradh/netbsd/current/src/sys/arch/x86/x86/cpu_rng.c, line 195

This code is not performance-critical.
 1.4  28-Feb-2016  riastradh KNF. No functional change.
 1.3  27-Feb-2016  tls Remove callout-based RNG support in VIA crypto driver; add VIA RNG backend for cpu_rng.
 1.2  27-Feb-2016  tls Add RDSEED and RDRAND backends for cpu_rng on amd64 and i386.
 1.1  27-Feb-2016  tls Add cpu_rng, a framework for simple on-CPU random number generators.
 1.5.22.2  13-Apr-2020  martin Mostly merge changes from HEAD upto 20200411
 1.5.22.1  10-Jun-2019  christos Sync with HEAD
 1.5.20.2  06-Sep-2018  pgoyette Sync with HEAD

Resolve a couple of conflicts (result of the uimin/uimax changes)
 1.5.20.1  28-Jul-2018  pgoyette Sync with HEAD
 1.5.18.2  03-Dec-2017  jdolecek update from HEAD
 1.5.18.1  29-Feb-2016  jdolecek file cpu_rng.c was added on branch tls-maxphys on 2017-12-03 11:36:50 +0000
 1.5.12.1  20-Jun-2020  martin Pull up following revision(s) (requested by riastradh in ticket #1560):

sys/arch/x86/x86/cpu_rng.c: revision 1.16

Use x86_read_psl/x86_disable_intr/x86_write_psl to defer interrupts.

Using x86_disable_intr/x86_enable_intr causes a bit of a snag when we
try it early at boot before we're ready to handle interrupts, because
it has the effect of enabling interrupts!

Fixes instant reset at boot on VIA CPUs. The instant reset on boot
is new since the entropy rework, which initialized the x86 CPU RNG
earlier than before, but in principle this could also cause other
problems while not early at boot too.

XXX pullup
 1.5.2.2  19-Mar-2016  skrll Sync with HEAD
 1.5.2.1  29-Feb-2016  skrll file cpu_rng.c was added on branch nick-nhusb on 2016-03-19 11:30:07 +0000
 1.9.4.2  20-Jun-2020  martin Pull up following revision(s) (requested by riastradh in ticket #960):

sys/arch/x86/x86/cpu_rng.c: revision 1.16

Use x86_read_psl/x86_disable_intr/x86_write_psl to defer interrupts.

Using x86_disable_intr/x86_enable_intr causes a bit of a snag when we
try it early at boot before we're ready to handle interrupts, because
it has the effect of enabling interrupts!

Fixes instant reset at boot on VIA CPUs. The instant reset on boot
is new since the entropy rework, which initialized the x86 CPU RNG
earlier than before, but in principle this could also cause other
problems while not early at boot too.

XXX pullup
 1.9.4.1  01-Nov-2019  martin Pull up following revision(s) (requested by taca in ticket #390):

sys/arch/x86/x86/cpu_rng.c: revision 1.10

Check CPU support of RDRAND before calling cpu_rng_rdrand().
cpu_earlyrng() checks CPU support of RDSEED and RDRAND before calling
cpu_rng_rdseed() and cpu_rng_rdrand().

But cpu_rng_rdseed() did not check CPU support of RDRAND, and the system
crashed in such an environment. This does not happen with a real CPU, but it
can in some VM environments.

Fix kern/54655 and confirmed by msaitoh@.
Needs pullup to netbsd-9.
 1.20.4.1  23-Aug-2024  martin Pull up following revision(s) (requested by riastradh in ticket #799):

sys/arch/x86/x86/cpu_rng.c: revision 1.21

x86/cpu_rng: Fix false alarm rate of CPU RNG health test.

Lower it from 1/2^32 (about one in four billion) to 1/2^256
(approximately not gonna happen squared).

PR port-amd64/58122
