Saturday, May 28, 2016

Kernel Skew: A Normalized Randomness Metric

This is based on the article entitled "Kernel Skew: A Signed Normalized Randomness Metric" from May 22, 2015. I realized that a signed metric was undesirable from a computing standpoint, so I took advantage of the apparent fact that no one cared about this definition to redefine it as being nonnegative. Here is the corrected article:

We define the "kernel skew" U of a mask list with expected kernel density S0 and actual kernel density S as follows:

  If (S <= S0): U = S / (2 * S0)
  Else: U = 1 - (S0 / (2 * S))

Note that because neither S0 nor S can be zero, the range of U is (0, 1), as opposed to [0, 1]; since U is strictly positive and strictly less than one, it is well suited to unsigned fixed-point representation.
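The piecewise definition above is short enough to sketch directly. This is only an illustration of the formula, not the Dyspoissometer implementation; the function name and the example density values are mine:

```python
def kernel_skew(s: float, s0: float) -> float:
    """Kernel skew U of actual kernel density s against expected density s0.

    Both densities must be positive, so U always lies strictly inside
    (0, 1), with U = 0.5 exactly when s == s0.
    """
    assert s > 0 and s0 > 0, "kernel densities are never zero"
    if s <= s0:
        return s / (2 * s0)
    return 1 - s0 / (2 * s)

# U = 0.5 at the expected density; smaller densities pull U toward 0
# (information destroyed), larger densities push U toward 1
# (information preserved).
print(kernel_skew(0.4, 0.4))  # 0.5
print(kernel_skew(0.1, 0.4))  # 0.125
print(kernel_skew(0.8, 0.4))  # 0.75
```

Note that the two branches agree at S = S0, where both evaluate to 0.5, so U is continuous across the boundary.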

U is defined such that U = 0.5 when S equals its expected value S0. Increasingly small values of U imply increasingly excessive destruction of information in the mask list iteration process, whereas increasingly large values of U imply increasingly excessive preservation of information. For example, in the limit of infinite block count, a permutation of all possible masks would have a skew of one (because a permutation conserves information), whereas a function which iterated to a single constant in every case would have a skew of zero. (Remember that block count equals mask count in all scenarios in which kernel density is defined.) On average, an ideal TRNG will preserve only a fraction S0 of its initial mask list once the list has been iterated an infinite number of times. Therefore the skew of its mean kernel density will tend toward 0.5.

As with kernel density, kernel skew relating to a block count of Q is referred to as "order-Q kernel skew".

Do not make the mistake of assuming that for an ideal TRNG, the expected skew is 0.5; the only guarantee is that the skew of the expected density is 0.5! So evaluate the mean kernel density over many iterations, then evaluate its skew once at the end.
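That order of operations can be sketched as follows. The density samples and the expected density here are invented for illustration; only the procedure (average the densities first, take the skew once at the end) reflects the text:

```python
def kernel_skew(s, s0):
    """Piecewise kernel skew U from the definition above."""
    if s <= s0:
        return s / (2 * s0)
    return 1 - s0 / (2 * s)

# Hypothetical order-Q kernel density measurements from many iterations.
density_samples = [0.35, 0.42, 0.38, 0.45]
s0 = 0.4  # expected kernel density (also hypothetical)

# Correct: average the densities over all iterations first...
mean_density = sum(density_samples) / len(density_samples)
# ...then evaluate the skew exactly once, at the end.
skew_of_mean = kernel_skew(mean_density, s0)

# Incorrect for this purpose: the mean of per-iteration skews is not
# guaranteed to approach 0.5 for an ideal TRNG.
mean_of_skews = sum(kernel_skew(s, s0)
                    for s in density_samples) / len(density_samples)
```

In this toy example the sampled densities average to S0, so skew_of_mean lands on 0.5 even though no individual sample has a skew of 0.5.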

This formula is implemented as dyspoissometer_kernel_skew_get() in Dyspoissometer.
