Control Data and Theoretical Comparison Standards
The generation of appropriate control data in the Global Consciousness Project (GCP) is complicated by variable temporal and spatial aspects of the specified events. As a consequence, it is desirable to have a uniform standard, independent of the particular experimental datasets, against which the
active data may be compared. Given a suitable, well-designed random source, this role can be fulfilled by theoretical expectation, in particular, the normal approximation to the appropriate theoretical binomial distribution. Thus, although specific comparisons against empirical controls can be made in the course of our analyses, the summary presentation of results refers in general to the theoretical standard. The analytical justification for this strategy derives from three perspectives:
- Calibration data show very good correspondence to theoretical expectations, with the variations expected by chance.
- Resampled, non-active data taken in the same context as the experimental data differ little from theoretical expectation.
- Comparisons of active data against the parameters of the resampled, non-active data distributions yield essentially the same results as comparisons with theory.
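The theoretical standard is straightforward to state: a trial is the sum of 200 bits, so under the null hypothesis it is binomially distributed with mean 100 and standard deviation sqrt(50). A minimal sketch of the comparison (the function name here is ours, for illustration only):

```python
import math

# Theoretical standard for a trial: the sum of N fair bits is
# binomial(N, 0.5), well approximated by a normal distribution with
# mean N/2 and standard deviation sqrt(N)/2.  N = 200 as in the GCP.
N = 200
MEAN = N / 2            # 100
SD = math.sqrt(N) / 2   # sqrt(50), about 7.071

def trial_z(trial_sum: int) -> float:
    """Z score of a single 200-bit trial against theoretical expectation."""
    return (trial_sum - MEAN) / SD

# Example: a trial sum of 110 lies about 1.414 SD above expectation.
print(round(trial_z(110), 3))   # -> 1.414
```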
The random event generators used in the Project incorporate three special measures to ensure nominal performance. First, only high-quality components are used in sophisticated, state-of-the-art hardware designs. Second, an XOR of the raw bit-stream with an alternating or balancing template eliminates secular bias of the mean. Third, the actual experimental data are sums of a large number (200) of bits, mitigating any residual short-lag autocorrelations and other potential time-series aberrations. All REG devices are subjected to calibrations prior to actual experimental application. Typical calibration results for a PEAR Micro-REG of the design used in the GCP are given in Table 1, which summarizes the distribution parameters for four independent calibration datasets, none of which is significantly deviant in any parameter.
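The effect of the second measure can be illustrated with a toy sketch (ours, not the device firmware): XORing with an alternating 0,1,0,1,... template inverts every other bit, so a constant bias toward 1 becomes an equal bias toward 0 on alternate bits and cancels in the mean over even-length blocks.

```python
def xor_balance(bits):
    """XOR the raw bit-stream with an alternating 0,1,0,1,... template.
    A constant bias toward 1 becomes an equal bias toward 0 on every
    other bit, so the mean bias cancels over even-length blocks."""
    return [b ^ (i & 1) for i, b in enumerate(bits)]

# A deterministic stream with a strong bias toward 1 ...
biased = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]       # mean 0.8
balanced = xor_balance(biased)
print(sum(biased) / len(biased), sum(balanced) / len(balanced))
# -> 0.8 0.5
```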
* The expected value for Kurtosis is normalized to zero for the normal distribution, and calculated as -2/N where N is the number of binomial samples.
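The -2/N footnote follows from the standard formula for the excess kurtosis of a binomial distribution:

```latex
\text{For } X \sim \mathrm{Bin}(N, p) \text{ with } q = 1 - p:\qquad
\kappa_{\mathrm{excess}} \;=\; \frac{1 - 6pq}{Npq}.
\qquad
\text{With } p = q = \tfrac12:\qquad
\kappa_{\mathrm{excess}} \;=\; \frac{1 - \tfrac{6}{4}}{N/4} \;=\; -\frac{2}{N}.
```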
In addition, the standard calibration analysis includes comparisons against theoretical predictions for the frequency of counts, statistics for blocks of 100 and 1000 trials, runs between consecutive high trials, runs between consecutive low trials, the arcsine distribution for the proportion of 50-trial runs above the mean, and autocorrelation functions for raw data and 50-trial blocks. Altogether, the analysis suite comprises 12 separate (though not necessarily independent) tests for each batch of calibrations. In the full battery of test scores for the data summarized in Table 1, there are a total of 48 tests, two of which are
significant at p = 0.05 or less, and this is just the proportion of such results expected by chance. The Bonferroni-adjusted p-value for the most extreme outcome of the 48 different tests is also non-significant. Thus, according to a broad spectrum of canonical calibration tests, the random event generator performance is statistically indistinguishable from theoretical expectations. These are typical results for the devices employed in the project.
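The multiple-testing arithmetic behind these statements is simple; a minimal sketch (the function is ours, for illustration):

```python
def bonferroni_adjust(p_min: float, k: int) -> float:
    """Bonferroni adjustment for the smallest of k p-values (capped at 1)."""
    return min(1.0, k * p_min)

# 48 tests at the 0.05 level are expected to produce 48 * 0.05 = 2.4
# chance "hits", consistent with the two observed.  And even a raw
# p of 0.02 is unremarkable after adjustment for 48 tests:
print(round(bonferroni_adjust(0.02, 48), 2))   # -> 0.96
```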
In FieldREG applications, upon which the GCP technology is based, it is not always feasible to collect matching
control data because many potentially important situational factors cannot be maintained. Usually the best that can be done is to take data in non-active time periods prior to or after the active data segments. For example, control data for a theater performance can only be taken before or after the performance, or between its acts, when the prevailing ambiance is quite different. When it is feasible to take data in a given environment before and after the designated experimental segments, some of the surrounding time periods may themselves be subject to the same influences as the active segments. (Indeed, even in laboratory experiments there is evidence that traditional "control" data may not be immune to anomalous effects of consciousness.)
Therefore, the standard analysis of FieldREG data includes a resampling procedure whenever the data file contains at least as much data in non-active segments as in those designated as active for the application. A pseudorandom process identifies and extracts, from the surrounding undesignated data, segments matching in number and size those designated as active. This resampling is repeated 1000 times, allowing the construction of a distribution of outcomes against which the results for the pre-defined, active experimental segments may be compared.
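The procedure can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the original analysis code: names are ours, segment Z scores are taken as sums of per-trial Z scores normalized by sqrt(n), and (as a simplification) drawn segments may overlap one another or the active data.

```python
import random

def resample_test(data_z, active_segments, n_resamples=1000, seed=1):
    """Segment-matching resampling sketch (illustrative only).
    data_z: per-trial Z scores for the whole file.
    active_segments: list of (start, length) pairs for the active data.
    Each resample draws segments matching the active ones in number and
    size at pseudorandom offsets, and records whether the composite
    chi-square (sum of squared segment Z scores) exceeds the test value."""
    rng = random.Random(seed)

    def seg_z(start, length):
        # Z score of a segment: sum of trial Zs over sqrt(n)
        return sum(data_z[start:start + length]) / length ** 0.5

    def chisq(segments):
        return sum(seg_z(s, n) ** 2 for s, n in segments)

    test_value = chisq(active_segments)
    exceed = sum(
        chisq([(rng.randrange(len(data_z) - n), n)
               for _, n in active_segments]) > test_value
        for _ in range(n_resamples)
    )
    return test_value, exceed
```

With a well-behaved random source and unexceptional active segments, roughly half the resamples should exceed the test value; a small exceedance count flags a deviant set of active segments.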
To provide a specific example, Tables 2 and 3 show the outcome of this standard segment-matching resampling procedure and an arbitrary resampling of the same data, using a dataset from a strongly deviant portion of a database generated in sacred sites in Egypt. Table 2 shows the original output from the analysis program with data extracted from the file for a single day, comprising about 2.5 hours of active data in nine segments taken in the Mycerinus and Khufu pyramids, surrounded by several hours of non-active data. (We should note that the non-active designation is relative to the specified analysis category — the day’s recording may include active segments from other analysis categories. This increases the conservatism of the analysis in proportion to the extent that deviant data are included by chance in the comparison distribution.)
    Report of Resampling Analysis
    Found field.dat with file size 75611.
    Data group (chant):
    Range            Z        P(Z)       T        P(T)
    14741-15881    0.3770   0.3531    0.3724   0.3548
    15881-16667    1.9673   0.0246    2.0209   0.0216
    41466-41973    0.0377   0.4850    0.0358   0.4857
    41979-43464    1.9414   0.0261    2.0306   0.0211
    43464-44479    2.6589   0.0039    2.6861   0.0036
    44483-45230   -1.3453   0.0893   -1.3554   0.0876
    45230-46112   -2.1333   0.0164   -2.1601   0.0154
    46679-48913   -0.4279   0.3344   -0.4272   0.3346
    48913-52798   -0.6103   0.2708   -0.6014   0.2738
    Active data 12681 of 75611 (0.1677)
    Bonferroni adjusted P-value of greatest deviation: 0.0683766
    9 D.F., Chi(Z) = 21.769 (0.0096), Chi(T) = 22.610 (0.0071)
    Performed 1000 resamplings for group (chant).
    Distribution of Z scores: M = -0.155244, SD = 0.965097
    Maximum Chi-squared is 24.5014
    A total of 2 out of 1000 resamples exceed the test value.
    Average resampled Chi-squared: 8.59869 +/- 3.55155 on 9 D.F.
    Resampling-Corrected Chi(Z): 22.785 on 9 D.F., P = 0.0067
Table 3 shows a calibration analysis of the same database. In this case, a set of arbitrary offsets was defined by taking segments of 1000 trials spaced at 10000-trial intervals, instead of using the segment definitions of the actual field application.
Calibration from Egypt, Giza2 (Oct 17)
    Report of Resampling Analysis
    Found field.dat with file size 75611.
    Data group (arbcal):
    Range            Z        P(Z)       T        P(T)
    10000-11000   -0.7916   0.2143   -0.7909   0.2145
    20000-21000    0.2012   0.4203    0.1968   0.4220
    30000-31000    1.4445   0.0743    1.3954   0.0815
    40000-41000   -0.7155   0.2371   -0.7102   0.2388
    50000-51000   -0.5545   0.2896   -0.5430   0.2936
    60000-61000   -0.6842   0.2469   -0.6846   0.2468
    70000-71000   -0.7737   0.2196   -0.7682   0.2212
    Active data 7000 of 75611 (0.0926)
    Bonferroni adjusted P-value of greatest deviation: 0.675705
    7 D.F., Chi(Z) = 4.640 (0.7038), Chi(T) = 4.469 (0.7244)
    Performed 1000 resamplings for group (arbcal).
    Distribution of Z scores: M = -0.00127839, SD = 1.07619
    Maximum Chi-squared is 28.1562
    A total of 815 out of 1000 resamples exceed the test value.
    Average resampled Chi-squared: 8.10615 +/- 3.88071 on 7 D.F.
    Resampling-Corrected Chi(Z): 4.007 on 7 D.F., P = 0.7790
In both cases, the Chi-square, noted as Chi(Z), is associated with a probability similar to the proportion of the 1000 resamples that exceed the test value. The Resampling-Corrected Chi(Z), based on the parameters of the distribution of 1000 Z scores, differs little from the theoretically based value, and the average resampled Chi-square does not differ from its expectation, the number of degrees of freedom. Thus, in this example, where a large composite anomalous deviation is found in the active data, both the original, experiment-based resampling and an arbitrary calibration resampling yield results consonant with theoretical expectation.
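The tabulated P(Z) values and the Chi(Z) statistic follow directly from the segment Z scores, and can be checked against Table 2 with a few lines of stdlib Python (a verification sketch, not the analysis program):

```python
from statistics import NormalDist

def p_of_z(z: float) -> float:
    """One-tailed probability of a segment Z score under the
    theoretical (normal) standard."""
    return 1.0 - NormalDist().cdf(z)

# Reproduce two P(Z) entries from Table 2:
print(round(p_of_z(2.6589), 4))   # -> 0.0039
print(round(p_of_z(1.9673), 4))   # -> 0.0246

# Chi(Z) is the sum of squared segment Z scores (here, Table 2's nine):
zs = [0.3770, 1.9673, 0.0377, 1.9414, 2.6589,
      -1.3453, -2.1333, -0.4279, -0.6103]
print(round(sum(z * z for z in zs), 3))   # -> 21.769
```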
Combining the calibration and resampling perspectives, the same sort of calibration resampling as was performed for Table 3 was applied to all ten Egypt datasets, which contain from about 60000 to 190000 trials each. The resampling was based on an arbitrary specification of 1000-trial (15-minute) segments at 10000-trial intervals. Only one of the 10 datasets showed a significant Chi-square, at p = 0.031 (Bonferroni-adjusted p = 0.31), even though the random placement would often, by chance, have included parts of the active data segments. The composite Chi-square for all these resampled data from the Egypt application is 85.012, with 81 degrees of freedom and an associated probability of 0.359. Thus, the data indicate a well-behaved random source when arbitrarily sampled; only when the data segments specified by the FieldREG protocol are considered does the data sequence exhibit anomalous deviations.
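The composite probability can be checked with the Wilson-Hilferty normal approximation to the chi-square tail, which is accurate at large degrees of freedom (the function here is our sketch, not project code):

```python
from math import sqrt
from statistics import NormalDist

def chisq_p(x: float, df: int) -> float:
    """Upper-tail p for a chi-square statistic via the Wilson-Hilferty
    normal approximation: the cube root of x/df is approximately normal
    with mean 1 - 2/(9*df) and variance 2/(9*df)."""
    z = ((x / df) ** (1 / 3) - (1 - 2 / (9 * df))) / sqrt(2 / (9 * df))
    return 1.0 - NormalDist().cdf(z)

# Composite chi-square for the ten Egypt calibration resamplings:
print(round(chisq_p(85.012, 81), 2))   # -> 0.36, close to the reported 0.359
```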
These examples demonstrate the complex structure of the FieldREG databases and illustrate the issues associated with adequate controls. The calibration and resampling results shown here clearly indicate that comparison of FieldREG data against theoretical standards is appropriate.
In light of this background research, the GCP analyses are likewise designed to use theoretical expectations as the primary comparison standard. In addition, the same precautionary internal comparisons using resampling techniques are included in the analysis suite.