Hi Roger,

I saw the new homepage that has material from the report. You've been hard at work! I see it's under construction, but here are a few comments. The wrong figure is attached to the rotten egg cumdev; it doesn't match the figure caption. I've attached two figure updates for the device variance cumdevs, in case you haven't got them. The page looks good and I like the tone. But who is it intended for, and does it replace the analysis page? It looks like it could.

Maybe this is a good way to work. In what I've been writing, I leave in phrases that will catch your attention (like "we know nothing(!) about...") and then you respond by reworking them. That saves some unnecessary back-and-forth. I think it's good to italicize some terms when they are defined, since we want to establish a working terminology. This pertains to the predictions section and what follows. There is an addition to the blocking section in the "review" doc (attached).

I'd like to settle on how we use the term "hypothesis test", as you've probably noticed in the text on event selection. My comments on "rigorous/not rigorous" have the same intent. People understand this in different ways, and we need to be clear and skillful about it. I want to distinguish between a strict test, which sets a closed hypothesis (even if composite) and a significance level, and the open-ended experiment that we do, in which the hypothesis doesn't fully determine the statistic or event character. Ours is somewhere between a hypothesis test and a meta-analysis. This will be clarifying for people. By the way, it is consistent with this view to steer away from p-values, as we have been doing, since for most people p-values are strongly associated with strict hypothesis tests. It's not good to simply nuance our use of the term, since we'll inevitably use it in a context where the reader is not aware of the nuance. I'd like to eliminate the use of "hypothesis test" unless we really do a classical test, but I'm not sure how we'd implement that.
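To make the distinction concrete, here is a minimal, purely hypothetical sketch of the strict end of the spectrum: a classical two-sided z-test in which the closed hypothesis (true mean = mu0) and the significance level alpha are both fixed before the data are seen. The function name and all numbers are my own illustration, not anything from the project data.

```python
# Hypothetical illustration only: none of these names or numbers come from
# the project. It shows what a "strict" classical test looks like: hypothesis
# and significance level are closed in advance, and everything else follows.
from statistics import NormalDist

def strict_z_test(sample_mean, mu0, sigma, n, alpha=0.05):
    """Two-sided z-test: a closed hypothesis (true mean = mu0) and a
    significance level alpha, both fixed before seeing the data."""
    z = (sample_mean - mu0) / (sigma / n ** 0.5)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    return abs(z) > z_crit, z

# Example: 1000 samples with known per-sample sigma, observed mean 100.35
# tested against the closed hypothesis mu0 = 100.0.
reject, z = strict_z_test(sample_mean=100.35, mu0=100.0, sigma=7.07, n=1000)
print("reject:", reject, " z =", round(z, 2))
```

The point of the sketch is the contrast: once mu0 and alpha are chosen, nothing is left open, which is exactly the closure our open-ended experiment lacks, since for us neither the statistic nor the event set is pinned down in advance.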
What do you think?

Thanks for copying the exchange that you've had with Leane and Mark. Keep me informed. They sound technically knowledgeable, and that's the kind of people we need. I hope we can meet at a future gathering. The exchange did sound a bit like a jargon fest, but that's OK, since people connect by showing what they can contribute. If they want to go further, they'll need to get clear on what they assume the GCP result implies and whether those assumptions have any support in the data that would warrant modelling. You know, play ball. That would be an interesting start.

I do tend to pull back when the jargon starts flying. Basically I feel "there's so much data and so little time," which doesn't allow much room for distraction. Actually, much of what they were saying was interesting, and I feel almost ready to engage in that kind of discussion. But for me, the real interest in that kind of play comes from mixing knowledge of techniques with knowledge of the data. And my knowledge of the data is not quite yet in place.

Reading the discussion, I had one flash. Three subspecies inhabit the kingdom of science: experimentalists, theorists, and modellers. These are like explorers, sages, and artisans (but don't push my metaphors too far :-). Modellers are ever more important these days. They can flesh out understanding. If a modeller meets one of the others, he'll talk about all the techniques he has at his disposal, etc. A good theorist is quite different. The good theorist will just turn to the experimentalist and say, "Let's look at your data." Just a thought. Not meant to put anyone down!!

Anyway, if people are interested in modelling, they should be clear about how well they can (or cannot) state their assumptions and how much the data does or does not support those assumptions. Then they can start modelling. If people are interested in data mining, they should know clearly that this assumes the GCP result implies data anomalies.
They should be able to point to where the data supports that, and show why the data does not support DAT or other experimenter-effect-like approaches that would assume there are no anomalies in the data. At the least, people should be very aware of these issues. Then they can play hardball instead of whiffle ball (which is fun too!). We're interested in anyone who's interested in the project, but we're especially interested in those folks who want to play hardball.

Is that it?

Best,
Peter