Why is "white noise" generated from uniform distribution sometimes autocorrelated?
I am trying to understand the properties of different time series models. In order to be white noise, $w_t$ must satisfy three conditions:

- zero mean: $E(w_t) = 0$,
- constant, finite variance: $\operatorname{Var}(w_t) = \sigma^2 < \infty$,
- no autocorrelation: $\operatorname{Cov}(w_t, w_s) = 0$ for $t \ne s$.
In practice, the normal distribution is most often used to simulate white noise (at least from what I've observed in different implementations in R or Python). I started wondering whether I could use any other distribution with zero mean, and did this:
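A minimal sketch of this kind of check (the exact code isn't reproduced here; it assumes a uniform on $[-0.5, 0.5]$, which has mean zero, and uses `acf` and `Box.test` from base R):

```r
set.seed(1)
n <- 1000

## uniform "white noise": centered so the mean is zero
w_unif <- runif(n, min = -0.5, max = 0.5)

## sample autocorrelation function with the usual +/- 1.96/sqrt(n) bands
acf(w_unif, main = "ACF of uniform white noise")

## portmanteau test of "no autocorrelation up to lag 10"
Box.test(w_unif, lag = 10, type = "Ljung-Box")
```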
If you are using a good random number generator (like the one in R) then, by definition really, you will get significant XXX about .05 of the time (or whatever your cutoff is for significance). XXX can be almost anything: autocorrelation, differences in means, whatever.
Any autocorrelation you find in your sample data is "real". It's just a question of (a) how likely it is that you got this level of autocorrelation in a sample if the population has none, and (b) whether this autocorrelation is big enough that it has to be dealt with in some particular way. It will never be exactly 0.0000, but how big is it?
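As a sketch of that point (not code from the original answer): for genuinely independent draws, the lag-1 sample autocorrelation falls outside the usual $\pm 1.96/\sqrt{n}$ band roughly 5% of the time, and it is essentially never exactly zero:

```r
set.seed(123)
n    <- 200
nsim <- 5000

## lag-1 sample autocorrelation for each of nsim independent uniform samples
r1 <- replicate(nsim, {
  x <- runif(n, -0.5, 0.5)
  acf(x, lag.max = 1, plot = FALSE)$acf[2]  # element 1 is lag 0, element 2 is lag 1
})

## (a) how often is it "significant" under the usual +/- 1.96/sqrt(n) band?
mean(abs(r1) > 1.96 / sqrt(n))   # roughly 0.05

## (b) how big does it get? never exactly 0, but typically small
summary(abs(r1))
```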
I went ahead and followed @StephanKolassa's suggestion (running the same simulation many times without resetting the seed in between is equivalent to running the simulation with many different seeds ...). I also tried it with pfun(rnorm) (the same simulation but with Normal rather than uniform deviates), with similar outcomes; a sketch of what such a simulation might look like is given below. You could try this with any of the r* (random-deviate) functions in R, although you'd have to change the code a little bit to use distributions that don't have default parameters (e.g. rgamma(n) won't work; you'd have to specify at least a shape parameter).
As expected, the histogram of p-values is uniform; plotting the log10 of the p-values emphasizes that small values do show up ...
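The pfun used above isn't reproduced in this excerpt; a minimal sketch of what such a simulation might look like (assuming pfun simulates one series from the supplied random-deviate function and returns the p-value of a Ljung-Box test) is:

```r
## hypothetical reconstruction of pfun: simulate one series from the supplied
## random-deviate function and return the p-value of a Ljung-Box test
pfun <- function(rfun = runif, n = 100, lag = 10) {
  x <- rfun(n)
  Box.test(x, lag = lag, type = "Ljung-Box")$p.value
}

set.seed(101)
pvals <- replicate(5000, pfun(runif))   # or pfun(rnorm), etc.

op <- par(mfrow = c(1, 2))
hist(pvals,        breaks = 50, main = "p-values",        xlab = "p")
hist(log10(pvals), breaks = 50, main = "log10(p-values)", xlab = "log10(p)")
par(op)
```

Swapping runif for rnorm (or any other r* function with usable defaults) is just a matter of changing the argument, as noted above.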