# Procedures for Skew Analysis of Obama Inauguration

The skew analysis Dean Radin applied to the GCP data from Barack Obama's Inauguration on January 20, 2009, is sufficiently different from our usual analyses that it needs a specific description and explanation. Most of the following is from Dean's emails accompanying the two figures on the main Inauguration page.

### Z (hourly skew)

Here's a completely exploratory graph about the inauguration. This shows an hour-long sliding window of one-minute chi-squared (sum of z^2) values, transformed back into z-scores. (The standard error of skew is sqrt(6/N), where N is the number of samples in the window.) The red line shows the moment the oath of office was taken. The positive skew that peaks significantly as the oath was being recited means that at that time the distribution leaned too far to the right, i.e., there were too many large scores at a per-minute level. In other words, at that time there was too much order at the sample level, where "order" means too many 0s or 1s per sample of 200 bits.
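The procedure Dean describes can be sketched in code. This is a hypothetical reconstruction, not his actual analysis script; the data layout (an array of per-trial z-scores, with an assumed 60 trials per minute) and the window handling are our assumptions:

```python
import numpy as np
from scipy import stats

def minute_chisq_z(trial_z, per_minute):
    """Collapse per-trial z-scores into one z-score per minute.

    trial_z: 1-D array of per-trial z-scores (one per sample of 200 bits).
    per_minute: number of trials in each minute (assumed here).
    Chi-squared = sum of z^2 over the minute; the chi-squared value is
    mapped back to a normal z via the inverse normal of its p-value.
    """
    n_min = len(trial_z) // per_minute
    z2 = (trial_z[: n_min * per_minute] ** 2).reshape(n_min, per_minute)
    chisq = z2.sum(axis=1)
    p = stats.chi2.sf(chisq, df=per_minute)  # upper-tail p-value
    return stats.norm.isf(p)                 # back to a z-score

def sliding_skew_z(minute_z, window=60):
    """Skew of minute z-scores in a sliding window, expressed as a z-score.

    The standard error of sample skew is approximately sqrt(6/N), with
    N the window length, so skew / sqrt(6/N) is roughly standard normal
    under the null hypothesis.
    """
    se = np.sqrt(6.0 / window)
    out = np.full(len(minute_z), np.nan)  # NaN where the window overruns
    half = window // 2
    for i in range(half, len(minute_z) - half):
        w = minute_z[i - half : i + half]  # window centered on minute i
        out[i] = stats.skew(w) / se
    return out
```

The window is centered on each minute (+/- 30 minutes), matching the description below of windows centered on real time.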

### Odds (hourly skew)

This is a picture of the one-tailed odds against chance for the high skew during the inauguration. The arrow points to the moment of taking the oath of office. Pretty dramatic.
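The odds figure is just the one-tailed p-value of the skew z-score, inverted. A minimal sketch of that conversion (an illustration, not Dean's code):

```python
from scipy import stats

def odds_against_chance(z):
    """Convert a z-score into one-tailed odds against chance (1/p)."""
    p = stats.norm.sf(z)  # one-tailed (upper) p-value
    return 1.0 / p
```

For example, z = 0 gives even odds of 2 to 1 (p = 0.5), and the odds grow rapidly as z increases.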

I used a sliding window for this analysis because I expected that particular moment in time to be an instant everyone was anticipating as a kind of miracle. As such, as in my analysis of the OJ Simpson event, I expected to see a sudden rise in mass coherence when that moment came to pass.

I used one-hour-wide sliding windows, with the window centered on real time (i.e., +/- 30 minutes around real time), and tested both the combined (Stouffer Z) and the skew of my one-minute chi-square scores. This is what you get with skew.
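The Stouffer Z mentioned here is the standard way of combining independent z-scores: their sum divided by the square root of their count. A minimal sketch:

```python
import numpy as np

def stouffer_z(z_scores):
    """Combine N independent z-scores into one: sum(z) / sqrt(N)."""
    z = np.asarray(z_scores, dtype=float)
    return float(z.sum() / np.sqrt(len(z)))
```

Applied per window, this tests for a mean shift of the minute scores, while the skew statistic tests for a change in the distribution's shape.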

This is not the result of hours of data snooping. I found it in a total of 5 minutes, as the second test I tried. I like skew because it is a measure of the shape of the distribution, and my sense of what's happening here is that these effects can be seen most easily via distortions in distributions. A positively skewed distribution means the symmetry normally expected was disrupted in the direction of too much juice in the right tails. In this case that meant too much order of the usual kind we're interested in. A mean or median measure would not necessarily see this as clearly.
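The point that skew sees shape distortions a mean or variance measure can miss can be illustrated with toy data (invented for the example, not the real GCP series): a symmetric widening of the distribution leaves skew untouched, while pushing a few values into the right tail produces a clear positive skew.

```python
import numpy as np
from scipy import stats

# A symmetric set of 56 toy "minute scores".
base = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0] * 8)

# Symmetric widening: variance grows, but the shape stays balanced.
widened = base * 1.5

# One-sided distortion: push a few minutes into the right tail.
leaned = base.copy()
leaned[:5] += 3.0

print("skew of widened:", stats.skew(widened))  # stays at zero
print("skew of leaned: ", stats.skew(leaned))   # positive
```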

Because we found some issues, detailed elsewhere, I asked Dean for some feedback. Dean added some further comments, and repeated his specific rejection of criticism based on issues like multiple testing, selection of free parameters, selection of test statistic, use of bootstrap stats, etc.

On a purely exploratory basis, and drawing on previous experience, I guessed how this data might behave given this event. I tried just two windowed statistics. I was pleasantly surprised to find that the second statistic (skew) resulted in an interesting odds spike within a minute or so of what I assumed was the most anticipated moment of that day.

Why windows, why skew, and why positive skew in particular? Because, as I think I previously mentioned, when dealing with an impulse event I imagine that the data fit a bell-shaped distribution that wobbles over time. When coherence momentarily spikes, one way the bell can react is to suddenly lean to the right, i.e., a positive skew. Or the bell's mean might shift to the right ("mean" here actually refers to variance, as I'm talking about z-score equivalents of chi-squares). Thus I looked at both mean shift and skew, and I felt that both might move to the right. No, I didn't log any of this in a formal predictions registry, but that's not what exploratory data analysis is all about. As I cautioned on my blog, "This is an *exploratory* analysis, so it shouldn't be regarded as persuasive as a preplanned analysis would be."

Note the similarity between this event and my first analysis of Y2K. Those two cases, along with others I can think of, suggest an experimenter effect (EE). Do analyses "contaminated" by EE point to real effects or to psychic statistics? I don't know. It's the joy and the frustration of exploratory data analysis. Sometimes it can confirm intuition about data. Sometimes not.