Re: Podcast Critical Reasoning for Beginners by Oxford Unive

PostPosted: 24 Sep 2011, 10:52
by craig weiler
No, you're not applying the same standard actually. You trust the physicists and you will believe their results even if their results are less statistically significant than the psi results. How do I know this? Because the statistical significance of the psi results is absurdly high: billions to one against chance, with higher than 95% confidence. This has been pointed out to you by Ice Age. You've ignored it.

Re: Podcast Critical Reasoning for Beginners by Oxford Unive

PostPosted: 24 Sep 2011, 21:37
by Arouet
I didn't ignore it. I addressed it directly. You just ignored my response.

If a small effect size result is due to error/bias (as they suspect in the CERN case), it will still show up as statistically significant - often highly so, with great odds against chance. Because it's not chance - it's error/bias. That's why small effect sizes are so hard to trust: it is so hard to get rid of small biases/errors. The difference between you and the CERN scientists is that the CERN scientists (and most scientists, I believe) readily acknowledge this.
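To see why a tiny systematic bias produces spectacular "odds against chance," here is a minimal sketch in Python. All the numbers are illustrative assumptions, not taken from any actual experiment: a process that should succeed exactly 50% of the time is given a hypothetical half-percent bias, and with a large number of trials the z-score against the chance hypothesis becomes enormous even though the effect size stays tiny.

```python
import math
import random

random.seed(42)

# Hypothetical setup: a process that pure chance says should hit 50%,
# but a small systematic bias nudges it up to 50.5%.
N = 1_000_000
biased_rate = 0.505  # assumed half-percent bias (illustrative)

hits = sum(random.random() < biased_rate for _ in range(N))

# z-score against the null hypothesis of pure chance (p0 = 0.5),
# using the normal approximation to the binomial.
p0 = 0.5
z = (hits - N * p0) / math.sqrt(N * p0 * (1 - p0))

effect_size = hits / N - p0
print(f"effect size: {effect_size:.4f}")  # tiny: roughly 0.005
print(f"z-score: {z:.1f}")                # huge: roughly 10
```

A z-score around 10 corresponds to odds against chance far beyond billions to one, yet the effect itself is half a percent - exactly the size an unnoticed calibration error or procedural bias could produce. The statistic can't tell a real small effect from a small systematic error; only independent replication and varied designs can.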

There is no one suggesting that this CERN report should be taken at face value - including the people who did the report. They are asking for criticism and replication. But here's the kicker. Let's just say that the replications are successful, and the neutrino result is found to be accurate. You can bet dollars to donuts that the investigation won't stop there. People will study it like crazy, and all sorts of applications or related discoveries will take place. Predictions will be made and confirmed or not confirmed. The evidence will mount and mount and mount, in addition to the applications.

That's where parapsychology is currently stuck. They get results a small degree away from chance - but then seem to hit a wall. I'd feel far more comfortable believing the conclusions of parapsychologists if they could expand beyond that. Make real, demonstrable predictions/applications.

Example: they had that group ostensibly get better-than-chance results at the stock market/track. Statistically significant. But what happened next? They just stopped. There is no one (that we know of) making millions using remote viewing at the track. Tracks haven't banned it the way casinos ban card counting. Now, this isn't evidence in itself, but the point is it would be easier to accept if the applications extended beyond an experiment that produces a small effect size.