Access Denied wrote: Jack and Chrlz, if you read Tom's article about the project in the MUFON journal linked to in the OP and the following brief presentation..
Thanks AD, I have now resolved my connection difficulties and had a bit of a look-see, but...
I think it will answer many of your questions...
Surely you know by now that my questioning of stuff is *endless*! It answered a few, but raised many, many more...
I have to confess that I am still a bit puzzled about what was thought achievable. Indeed, it looks as if there was a hope that the data would show some remarkable anomalies corresponding with a subject's report of an 'experience' - in which case I'm sure eager researchers would have jumped at the chance to analyse it in more detail. I'm assuming that didn't happen (despite some reports of 'experiences'), which might explain the apparent loss of interest.
From a methodology point of view, I have *lots* of concerns with what I have read, but without fully understanding the objectives of the project I might be being a bit harsh... (which has never stopped me in the past...)
As a quick example, in the first pdf I found this:
A word about our scientific approach—a good scientific
test is conducted in a double blind fashion.
There's an *awful* lot more involved in a good scientific test..
In this study, that
would mean that one subject would be tested with a real test
unit, and one would be tested with a false test unit.
Umm, no, it doesn't. (That's more like a 'placebo'! - and how on earth could you 'test' anyone with an empty box???) It is not how double-blind testing would apply here. For example, it *might* involve the researchers being unaware of which 'black box' data set belonged to which subject (together with the collection of additional data from random people using real black boxes). The researcher/statistician would then have to examine the 'black box' data from all the subjects looking for anomalies, without having any access to the subjects' reports of possible 'experiences'. If instead they checked the subjects' reports first and then went poring over the data looking for matching anomalies, the accusation of confirmation bias could be made...
Now I'm not suggesting that sort of approach would necessarily satisfy the double-blind requirement here (indeed, that requirement may not even be relevant to such a project - without understanding a bit more, I just don't know). But my point is that the quote above does suggest an unfamiliarity with the various tests that can (and should) be applied to any 'science' that wants to be taken seriously.
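For what it's worth, here's a toy sketch (in Python, with entirely made-up subject names, codes and an arbitrary anomaly threshold - nothing to do with the actual project) of the kind of separation I have in mind: the coordinator holds the key linking subjects to opaque codes, the analyst flags anomalies on coded data only, and unblinding happens *after* the flags are locked in.

```python
import random

def blind_assign(subject_ids, seed=0):
    """Map each subject ID to an opaque code; the key stays with a third party."""
    rng = random.Random(seed)
    codes = [f"BOX-{n:03d}" for n in range(len(subject_ids))]
    rng.shuffle(codes)
    # Held by the coordinator only - the analyst never sees this mapping.
    return dict(zip(subject_ids, codes))

def flag_anomalies(coded_datasets, threshold=3.0):
    """Analyst sees only coded datasets; flags any reading beyond the threshold."""
    return {code: any(abs(x) > threshold for x in readings)
            for code, readings in coded_datasets.items()}

# Coordinator side: knows who is who.
key = blind_assign(["alice", "bob", "carol"])

# Analyst side: receives data labelled only with opaque codes.
coded_data = {key["alice"]: [0.2, 0.1, 4.5],
              key["bob"]:   [0.3, -0.2, 0.1],
              key["carol"]: [1.1, 0.9, 0.4]}
flags = flag_anomalies(coded_data)

# Only after the flags are fixed does the coordinator unblind,
# so anomalies can be compared against the subjects' own reports.
unblinded = {subj: flags[code] for subj, code in key.items()}
```

The point of the sketch is only the ordering: anomaly detection is committed to before anyone knows whose data showed what, which is what guards against the confirmation-bias problem above.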
We instead opted for a separation of duties. As the data
collector, I was to never know the name or address of the
subjects. I was to only know the name of the researcher...
?? So, by choosing another type of test, one can simply dismiss or replace the first one? Or just pick and choose the tests the researchers would *like* to apply? Science just doesn't work like that. That's one of the reasons why peer review is applied... (and I'm applying a bit of it now...)
I was also rather concerned when I read the comments about how the researchers would 'know' if the box was tampered with or 'cheated'. I immediately thought of several ways one could cheat that would almost certainly be undetectable - and it has to be acknowledged that the subjects might have had a strong motivation to do so. It's not as if they didn't have a pretty good idea of what the researchers were seeking, and 'confirmation' of their claims would be in their interest...
Anyway, I'd be interested to see any 'deeper' documentation and/or discussion of the methodology and primary aims of the project.
"To wear the mantle of Galileo, it is not enough that you be persecuted by an unkind establishment. You must also be right." - Robert L. Park (..almost)