Global Consciousness Project

What is the nature of Global Consciousness?


The following are brief notes on general principles and particular implementations that will be developed and refined as the project grows. The system is too large and complex to pre-plan all details, and these notes serve as an outline to sketch some possibilities and requirements. The specific formal tests of the GCP hypothesis use the Chi-square procedure described in the methodology section.

Protecting the science

There are three areas requiring some preparation and development of effective procedures to ensure that the data are valid.

  1. Data Integrity: All data are monitored for conformance of raw data with known parameters (e.g., checksums) at entry into the analysis array.
  2. Data conformance with theoretical expectation is validated by calibration against null models; the calibration suite specifies exception thresholds.
  3. Data security is augmented by redundant offline data storage; cross-validation probes can identify exceptions.
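The first of these checks can be illustrated with a small sketch. The record layout, value ranges, and checksum scheme below are assumptions for illustration, not the project's actual data format:

```python
import hashlib

# Assumed parameters for illustration: each trial sums 200 random bits,
# so valid trial values lie in [0, 200].
TRIAL_BITS = 200

def record_checksum(values):
    """Illustrative checksum: MD5 over the comma-joined trial values."""
    return hashlib.md5(",".join(map(str, values)).encode()).hexdigest()

def validate_record(values, checksum):
    """Accept a record into the analysis array only if every trial value
    is within the known parameter range and the checksum matches."""
    in_range = all(0 <= v <= TRIAL_BITS for v in values)
    return in_range and record_checksum(values) == checksum

good = [101, 98, 100, 95]
assert validate_record(good, record_checksum(good))
assert not validate_record([250], record_checksum([250]))  # out of range
```

A real pipeline would also log and quarantine rejected records rather than silently dropping them, so that exceptions can be examined.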


The EGG Project as a scientific program will depend upon a progressively refined protocol with clearly defined specifications and criteria for selecting public events and corresponding data segments of interest, including time-stamped documentation and a registry for predictions.

As described in Methodology, the primary analyses will be based on equally weighted contributions from all active Eggs, with no regard for distance from the Global Event. In secondary analyses, comparisons may be made between relatively local and more distant Eggs.

The exact algorithmic procedures for each formal analysis must be specified as part of the prediction, before the data are examined. This is done most often by indicating that the standard analysis will be used. This and other defined analyses that we have used over the course of the experiment are detailed in recipes that, if followed, will duplicate the original GCP analysis. (In some cases, extra data will have been accumulated from dial and drop eggs.)

With regard to time, our formal assessments will consider only data which are co-temporaneous with the event of interest. For extended events, we will use the timing and segmentation indicated in news reports. Analyses for point events will center on the identified moment, and consider the data segments accumulated during a few beats of a global consciousness pulse (perhaps 5 to 15 minutes) before and after the event.
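Selecting the co-temporaneous data segment for a point event can be sketched as follows, assuming one trial per second keyed by Unix timestamp (the function name and data layout are hypothetical):

```python
# Sketch: extract the data window surrounding a point event.
# half_width_s = 600 corresponds to a 10-minute "pulse" on either side,
# within the 5-to-15-minute range suggested in the text.
def event_window(data, event_time, half_width_s=600):
    """Return (timestamp, value) pairs within +/- half_width_s of the event.

    data: dict mapping Unix timestamp (seconds) -> trial value.
    """
    return [(t, v) for t, v in sorted(data.items())
            if event_time - half_width_s <= t <= event_time + half_width_s]
```

For extended events, the same selection would instead use start and end times taken from the news-report segmentation.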


The focus for most analyses will be anomalous shifts of the segment mean. As noted, the standard test for deviations from expected variation will be a Chi-square comparison of the composite deviation across all Eggs during the specified event against chance expectation. This composite will be a normalized and squared sum of the Z-scores (squared Stouffer Z) for all Eggs and all predefined segments (e.g., seconds or 15-minute blocks). We will make exploratory assessments of other parameters, such as intercorrelation of the Eggs during an event, as possible indicators. Correspondence of computed deviations with the time-line of predictions will provide the primary criterion for statistical evaluation.
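The squared Stouffer Z composite can be sketched in a few lines. This assumes the per-Egg, per-segment deviations have already been converted to standard normal Z-scores; under the null hypothesis each squared Stouffer Z is Chi-square with 1 degree of freedom, so the sum over segments is Chi-square with as many degrees of freedom as there are segments:

```python
import math

def squared_stouffer_chisq(z):
    """z: list of per-Egg lists of Z-scores, one entry per predefined segment.
    Returns (chi_square, degrees_of_freedom)."""
    n_eggs = len(z)
    n_segs = len(z[0])
    chisq = 0.0
    for s in range(n_segs):
        # Stouffer Z across Eggs for this segment: sum of Z's over sqrt(N).
        stouffer = sum(z[e][s] for e in range(n_eggs)) / math.sqrt(n_eggs)
        chisq += stouffer ** 2  # each term ~ Chi-square(1) under the null
    return chisq, n_segs

def chisq_zscore(chisq, df):
    """Normal approximation to the Chi-square tail for large df:
    Z = (chisq - df) / sqrt(2 * df)."""
    return (chisq - df) / math.sqrt(2 * df)
```

For example, two Eggs each reporting Z = 1 in a single segment give a Stouffer Z of 2/sqrt(2) and hence a composite Chi-square of 2 on 1 degree of freedom.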

A second major focus will be intercorrelation: calculation and analysis of the correlation matrix across Eggs. This is intended to provide a general assessment of the hypothesis that some influence might affect all of the REG devices. Application will be made to signed deviations, and also to squared deviations (Chi-square). We may be able also to include a search for patterns or structure, as well as extreme value search algorithms.
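A minimal pure-Python sketch of the intercorrelation idea, computing pairwise Pearson correlations between Egg time series over an event window (the helper names are hypothetical):

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_matrix(series):
    """series: list of equal-length Egg time series (signed or squared
    deviations). Returns the full N x N correlation matrix."""
    return [[pearson(a, b) for b in series] for a in series]
```

A summary statistic such as the mean off-diagonal correlation could then be compared against its chance distribution; a shared influence on all devices would show up as an excess of positive intercorrelation.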

Another focus will be on the transitional probabilities within the sequences of interest. These include a number of time-related analyses, and such measures as Wackermann's Omega-complexity. In the same vein we may apply Atmanspacher and Scheingraber's Scaling Index Analysis. We also may explore correlations with environmental variables including automatically registered global-scale measures such as sidereal time, geomagnetic field fluctuations, and seismographic activity.
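The simplest transitional-probability analysis, estimating first-order transition probabilities over a discretized sequence (for instance, trial scores binned above or below the expected mean), might look like this sketch:

```python
from collections import Counter

def transition_probs(seq):
    """Estimate P(next = b | current = a) from adjacent pairs in seq.

    seq: a sequence of discrete states (e.g., 0 = below mean, 1 = above).
    Returns a dict mapping (a, b) -> conditional probability.
    """
    pairs = Counter(zip(seq, seq[1:]))       # counts of each transition
    totals = Counter(seq[:-1])               # occurrences of each source state
    return {(a, b): c / totals[a] for (a, b), c in pairs.items()}
```

For a random sequence the transition probabilities should be indistinguishable from the marginal state probabilities; systematic departures would indicate temporal structure. The more elaborate measures mentioned above (Omega-complexity, Scaling Index Analysis) would require their own dedicated implementations.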

An excellent introduction to the basic statistical character of the data gathered in the GCP can be found in a description of the probability and statistics of a long-running experiment called The RetroPsychoKinesis Project.

A list of some analyses which would be useful but which require skilled programming (e.g., with perl scripts) may be found in the to do page.


These notes concern presentations of the data and analytic results. We expect them to be attractive and elegant, as well as informative. Some displays will indeed be aesthetically pleasing by their nature. All of the analyses will be enhanced by some form of refined and communicative visual (possibly auditory) display. We will prefer graphics which follow Edward Tufte's "waste no ink" dictum of simplicity, readability, and elegance. On the other hand, dynamic, flowing and changing displays will be used where sensible, and displays may be multi-dimensional, with 3D, color, motion, and sound if these help to render the information accessible.

A list of some displays which would be useful and interesting may be found in the to do page.


Control data are needed to establish the viability of the statistical results from active data generated during events specified via the prediction protocol. The control data are expected to produce chance results because by hypothesis no engaging event is specified. The complex nature of the data in the Global Consciousness Project and the situation-dependent nature of the predictions require specially designed procedures for ensuring that the statistical characterizations of the data are valid. We use a multi-pronged approach to this issue, with several complementary tactics all directed to the establishment of a theoretical standard for statistical comparisons. The main components are quality-controlled equipment design, thorough device calibration, and a procedure called resampling. The combined force of these efforts ensures that the GCP data meet appropriate standards, and that the active subsets subjected to hypothesis testing are evaluated against chance expectation as well as a large body of surrounding control and calibration data. See also Appendix, Nelson et al., FieldREG II.
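The resampling tactic can be sketched in a few lines: compute the test statistic on the active segment, then compare it against the distribution of the same statistic over many randomly placed control segments of equal length drawn from surrounding data. The function below is an illustrative implementation, not the project's production code:

```python
import random

def resample_p_value(active_stat, series, seg_len, statistic,
                     n_resamples=1000, rng=None):
    """Fraction of randomly placed control segments whose statistic
    equals or exceeds the active segment's statistic.

    series: surrounding control data (one value per trial).
    statistic: function mapping a segment to a scalar (e.g., sum).
    """
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    count = 0
    for _ in range(n_resamples):
        start = rng.randrange(len(series) - seg_len)
        if statistic(series[start:start + seg_len]) >= active_stat:
            count += 1
    return count / n_resamples
```

A small empirical p-value from this procedure indicates that the active segment is unusual relative to its own local control context, complementing the theoretical Chi-square evaluation.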