I asked Peter for his thoughts on possible explanations for
the sometimes quite strong negative deviations of the netvar
statistic, for example as seen in a number of large,
organized meditations. It is actually a general question
that bears on mechanism. This page is a discussion of
possibilities or tentative models, which leads to the
conclusion that it is at least reasonable to consider
bit-level autocorrelation as a fundamental part of the
picture. More work is needed, but this is interesting and
thought-provoking material:

### Negative Going Netvar

We can frame our results as follows. We look at the 1-sec
Stouffer Z
and find that Var[Z] and Var[Z^2] deviate positively over
the events.
These are the netvar and covar. The underlying stat is the Stouffer Z. It is the normalized trial mean. In thinking about what
is going
on we can take Z to be a normal score, OR we can consider it
to be a
mean value of the reg trials OR we can consider it to be a
bitsum
over regs. If we ask how to interpret a pos. or neg.
netvar/covar, we
need to know what level of data structure we're interested
in (Z,
regs or bits). We'll answer a bit differently in each case.
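
As a concrete reference for the three levels, here is a minimal sketch (Python/numpy) of how one second of data rolls up from bits to reg z-scores to the 1-sec Stouffer Z. The 50-reg network and the 200 bits per trial are taken from the discussion further down; everything else is just illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_REGS, BITS_PER_TRIAL = 50, 200    # assumed 50-reg network; 200 bits per reg per second

# Bit level: each reg sums 200 nominally fair bits in one second.
bits = rng.integers(0, 2, size=(N_REGS, BITS_PER_TRIAL))
trialsums = bits.sum(axis=1)

# Reg (trial) level: normalize each trial bitsum to a z-score.
z = (trialsums - BITS_PER_TRIAL / 2) / np.sqrt(BITS_PER_TRIAL / 4)

# Network level: the 1-sec Stouffer Z; its square is what feeds the netvar.
Z = z.sum() / np.sqrt(N_REGS)
print(Z, Z**2)
```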

### Z, the network level

If we just consider the mean output, Z, we're not asking
anything
about the underlying structure at the reg or bit level.
Var[Z] and
Var[Z^2] can vary, because they are random variables, and
finding a
pos or neg trend doesn't tell us much, other than that there is a deviation in these stats that has a small p-value.

Note: I hope your email reads Greek fonts; below I use
β for beta and Δ for delta and μ for mu...

### z, the reg (trial) level

If we consider that Z is based on the sum of z's from
independent
devices, then we can ask how a deviation in Var[Z] or
Var[Z^2] might
affect the individual reg statistics. Different scenarios can
be
examined by simulation. For example, the netvar/covar
deviations may
be due to contributions from all regs. This is what we
currently
believe. We can express this by saying that the effect is
distributed
across the network or by saying that the regs are correlated
at the
1-sec level. This is demonstrated by a simple simulation
that I've
done: Input reg trial values as normal z-scores (taking,
say, 50 of
them to represent a network of 50 online regs). I can run
this to
simulate events a few hours long and look at the netvar and
covar
over ~200 events as we do in the formal experiment. Using
standard
normal scores, the netvar and covar have zero deviation, and
all is
copacetic. Next, try this while adding a small, random
deviation to
the mean or variance of *the individual reg distributions*.
That is,
instead of drawing the reg trial values from a N[0,1]
distribution I draw from a N[0+β, 1+Δ]. The β
and Δ are
small random shifts in the
reg mean and variance. I choose the shifts randomly each
second, but,
each second, the same mean/var shift is applied to all regs
online.
In the past, you have termed this an average mean shift of
the regs
when discussing the Stouffer Z^2 increase. Here is what the
simulation shows: we can reproduce the netvar/covar effects
by taking
β and Δ to be randomly selected in the range
± 0.01.
That is, the
mean (and the variance) of the underlying distribution of reg
trial
values is allowed to vary slightly, each second, but with an
*average* variation of zero. This guarantees Mean[Z] = 0,
as we
find for the event data, but yields a positive netvar/covar
variation
on the scale of what we see for the events. We learn one
thing here:
β affects the netvar and Δ affects the covar. So we can
say our data
is *consistent* with a correlated mean shift (from the
netvar
results) and a correlated variance shift (from the covar
results) at
the reg level, and that these shifts on average are zero and
fluctuate around zero by about 1%.
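
For concreteness, here is a minimal sketch of that simulation (Python/numpy). The 50-reg network, ~200 events of a few hours, the shared per-second shifts, and the ±0.01 range all follow the description above; the particular netvar-like and covar-like excess measures are my own stand-ins, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
N_REGS, N_EVENTS, SECS = 50, 200, 3 * 3600    # 50 regs, ~200 events of a few hours each

def run(shift):
    """Netvar-like and covar-like excesses summed over events, with a shared mean
    shift beta and variance shift delta redrawn each second from +/- shift."""
    netvar = covar = 0.0
    for _ in range(N_EVENTS):
        beta  = rng.uniform(-shift, shift, size=(SECS, 1))   # same shift for every reg this second
        delta = rng.uniform(-shift, shift, size=(SECS, 1))
        z = rng.normal(beta, np.sqrt(1.0 + delta), size=(SECS, N_REGS))   # reg trial z-scores
        Z = z.sum(axis=1) / np.sqrt(N_REGS)                  # 1-sec Stouffer Z
        netvar += (Z**2 - 1).sum()                           # excess of Z^2 over its null mean, 1
        covar  += ((Z**2 - 1)**2 - 2).sum()                  # excess spread of Z^2 (null value 2)
    return netvar, covar

print("beta, delta = 0     :", run(0.0))
print("beta, delta ~ +-0.01:", run(0.01))
```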

A bottom line comment is that since we don't see deviations
in the
Mean[Z] or the devvar for the real event data, any
deviations in
underlying distributions at the reg level probably are
fluctuating
with zero mean, that is <β> = <Δ> = 0, as implemented in
this
simulation. Also, for the β,Δ deviation model, we are only
able to simulate positive deviations in the netvar and covar.

There are other possibilities and we will need to do some
work to
assess them. For example, it could be that only one or a few
regs
have deviating means and variances, or that the deviations
are
restricted to regs in a geographical area (which would give
a
distance effect to the pair-products...). More simulations
are needed
and I think you can see that it quickly becomes a project. I
suspect
we have some leverage since we can monitor the devvar for
these
different situations. That might allow us to control for
different
scenarios. We can also look directly at the distributions of
the
individual regs, both in simulation and for the event data.
This will
most likely let us put some limits on possible scenarios.
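
To illustrate the leverage the devvar might give, here is a hedged sketch (Python/numpy). It takes devvar to mean the summed squared reg z-scores, which is my reading rather than a definition given here, and it uses shifts larger than the ±0.01 above so the contrast shows up in a short run. The single-reg shift is tuned so both scenarios produce a comparable netvar excess; the devvar is what tells them apart.

```python
import numpy as np

rng = np.random.default_rng(2)
N_REGS, SECS = 50, 100_000

def excesses(z):
    """Netvar-like and devvar-like excesses; devvar is assumed here to be the
    summed squared reg z-scores."""
    Z = z.sum(axis=1) / np.sqrt(N_REGS)
    return (Z**2 - 1).sum(), (z**2 - 1).sum()

# Scenario 1: a small mean shift shared by every reg, redrawn each second.
beta = rng.uniform(-0.03, 0.03, size=(SECS, 1))        # exaggerated vs the +-0.01 above
z_all = rng.normal(beta, 1.0, size=(SECS, N_REGS))

# Scenario 2: a single reg carries a much larger shift, scaled up by N_REGS so the
# netvar excess is about the same as in scenario 1.
z_one = rng.normal(0.0, 1.0, size=(SECS, N_REGS))
z_one[:, 0] += rng.uniform(-1.5, 1.5, size=SECS)

print("all regs shifted (netvar, devvar excess):", excesses(z_all))
print("one reg shifted  (netvar, devvar excess):", excesses(z_one))
```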

There is another point that needs to be stated. Currently,
the
netvar/covar results cannot be distinguished from a
simulation where
reg pseudo-data is generated with N[0,1] (or, equivalently,
Z is
generated from N[0,1] ) *and* then events are selected by a
filter
giving an average event Z-score of 0.3. This is an
experimenter
effect scenario. I currently have little *in the data* that
distinguishes DAT-like selection from the "imposed"
β,Δ
mean/variance shifts. The distance correlation and the
netvar/covar
correlation (which comes in at ~ 1 to 2 hour blocking) may
be an
argument against selection/experimenter intuition, but it's
weak at
this point.

### 010101, the bit level

If we go down to the bit level we can start to address your
original question. If we consider Z to be based on a bitsum over all
regs, we
can ask how the bit distribution must deviate in order to
give the
netvar/covar results. Since the data are XOR'd, we need to
consider
that as well. I've only thought about this for the netvar.
In what follows, consider that the effect is evenly
distributed
across regs. Then we can take the network as a single bit
generator
with 200N bits/second (ie, from an N-reg network). I am
aware of two
ways to alter the binomial output: alter p, the bit
probability, or
impose a short lag autocorrelation. The binomial variance
(which is
the netvar in this case) goes as p-p^2. That is, it is an
inverted
quadratic with a maximum at p=0.5. Any shift in p, positive
or
negative, will **lower** the measured variance. The XOR passes
the
variance change; that is, a negative-going netvar will occur,
with the
same magnitude, whether there is an XOR in place or not. So
we can't
obtain a netvar **increase**, as we see for the events, by a
shift in p.
What if p fluctuates second-by-second along the lines of the β,Δ simulation?
Here the XOR will still cancel any mean shift on
average
and the variance will just be the expectation of the
variance over
the distribution of p's fluctuations, ie, negative-going. So
by this
reasoning, we can't increase the netvar by diddling p.
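
Here is a minimal sketch of the p-shift argument (Python/numpy). The 200N bits/second figure follows the text with an assumed N = 50; the fixed, balanced 0101 XOR mask is also an assumption, and the p values are exaggerated so the variance change is visible in a short run.

```python
import numpy as np

rng = np.random.default_rng(3)
N_REGS = 50                               # assumed network size
BITS, SECS = 200 * N_REGS, 200_000        # 200N bits/second, pooled across the network

def netvar_like(p, label):
    """Variance of the normalized 1-sec bitsum, for the raw stream and for a stream
    XOR'd against a fixed, balanced 0101... mask. p may be a scalar or per-second array."""
    p_sec = np.broadcast_to(np.asarray(p, dtype=float), (SECS,))
    raw = rng.binomial(BITS, p_sec)
    # The balanced mask passes half the bits and flips the other half, so the
    # XOR'd bitsum is Bin(BITS/2, p) + Bin(BITS/2, 1 - p).
    xord = rng.binomial(BITS // 2, p_sec) + rng.binomial(BITS // 2, 1.0 - p_sec)
    z = lambda s: (s - BITS / 2) / np.sqrt(BITS / 4)   # ~N(0,1) for fair, independent bits
    print(f"{label:24s} raw var = {z(raw).var():8.3f}   XOR'd var = {z(xord).var():8.3f}")

netvar_like(0.5, "p = 0.5 (fair)")
netvar_like(0.6, "p = 0.6 (fixed shift)")
netvar_like(rng.uniform(0.4, 0.6, size=SECS), "p redrawn each second")
```

Both the fixed shift and the fluctuating p pull the XOR'd variance below 1, in line with the argument above; without the XOR, a wandering p instead inflates the variance through the moving mean, which is just the mean-cancellation the XOR provides.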

What about imposing an autocorrelation on the bits? Suppose
we have
an autocorr at a lag of 1 with a strength μ. In this case
we find
that the variance V changes to (1-μ)V. So the variance
decreases for
a positive autocorr and increases for a negative μ
(anti-correlation). A simple XOR will invert this. This
means that a
positive-going netvar *could* indicate a short-lag
anti-correlation
at the bit level and a negative netvar trend *could*
indicate a
positive bit autocorrelation. From this corner of
scenario-space, we are suddenly talking in terms we haven't used before. There is
a
suggestion that the netvar result implies an autocorrelation
in the
regs' bitstreams. [It may also put the PEAR data in a new
light -
they describe operator experiments as a shift in p; here,
we'd say
that, since the regs are XOR'd, they should really discuss a
real
autocorrelation in the devices, not a p-shift of the XOR'd
output...]

The bottom line to all this is that we can imagine a
positive or a
negative netvar by imposing an appropriate autocorrelation
on the
bitstream. One nice thing is that the sign of the
autocorrelation
gives a very different character to what happens to the
data: up is
very different from down. This could be mirrored in the character of
global consciousness being very different depending on
whether netvar
trends are positive or negative.

I hope this fleshes things out a bit (no pun intended). This
story is one more strong reason for carrying our analyses down to the
reg level where some answers might be teased out of the data.