Throughout my studies in statistics, I learned that failing to find evidence for the alternative hypothesis you're testing does not mean the alternative hypothesis is false, especially when only a few studies exist. There are many possible explanations for a null result: the statistical power (the probability of detecting an effect, if one exists) may be low because the sample size is inadequate, the alpha level (commonly 0.05 or 0.01) may be set too strictly (e.g., 0.001), the effect size may be small, and so on. The idea that "my experiment failed to support the alternative hypothesis, therefore it's false" is one of the most common misconceptions among researchers.

Throughout history, many psi experiments (e.g., the ganzfeld) have been conducted, and many have reported replicable, substantial evidence of psi. But a big question hangs over these studies: why do some studies show positive evidence of psi while others fail to find any, with hit rates inconsistent with the average hit rates in the meta-analyses?

Well, think of it this way. Suppose a basketball player claims he makes baskets at an average hit rate of 80%. So you run a test and have him take 20 shots. Let's say he makes 8 out of 20 (40%). That looks like strong evidence against his claim, because he failed to do what he claimed, but is it really? To find out for sure, we use the power of statistics! The p-value of scoring 8 or fewer out of 20, if the claimant really does have an 80% hit rate, is P ≈ 0.0001. In other words, a player who truly has an 80% hit rate would do this badly in only about 1 in 10,000 tests (assuming he isn't missing on purpose).
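You can check this number yourself; here is a minimal Python sketch (standard library only) that computes the exact binomial tail probability rather than relying on a normal approximation:

```python
from math import comb

def binom_tail_leq(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): the one-tailed p-value
    for observing k or fewer successes in n trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Probability of making 8 or fewer of 20 shots if the true hit rate is 80%
p_value = binom_tail_leq(8, 20, 0.80)
print(round(p_value, 6))  # ~0.0001
```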

So, should we reject his claim?

Answer: Absolutely, because the p-value is far below 0.05.

Now, let's say a player claims a 50% hit rate. You run the test again, and this time he makes 6 out of 20 (30%).

"Aha! You failed to reach 50%; therefore, your claim is false!"

Common sense tells us his claim is false simply because he fell short of a 50% hit rate, but is this really evidence against it? Again, to find out for sure, we use the power of statistics! The p-value of making 6 or fewer out of 20, if the claimant really does have a 50% hit rate, is P ≈ 0.058 (one-tailed).

So, should we reject his claim?

Answer: Absolutely not! The p-value does not reach the P < 0.05 level, so there is no real evidence that the claimant's claim is false.
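The same exact-binomial calculation confirms how borderline this result is (a standard-library Python sketch; the exact one-tailed tail comes out just above 0.05):

```python
from math import comb

def binom_tail_leq(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): the one-tailed p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 6 or fewer hits out of 20 if the true hit rate is really 50%
p_value = binom_tail_leq(6, 20, 0.50)
print(round(p_value, 4))  # ~0.058, above 0.05, so we cannot reject the claim
```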

Now, let's say the claimant takes 100 shots and makes 30 (again a 30% hit rate).

The p-value of making 30 or fewer out of 100, if the true hit rate is 50%, is P ≈ 0.00004 (one-tailed).

Even though the observed hit rate is the same as before, the p-value is very different. Now we do have strong evidence that the claimant's claim is not true.
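A quick Python sketch (standard library only) shows how the p-value shrinks with sample size even though the observed rate, 30%, is identical in both tests:

```python
from math import comb

def binom_tail_leq(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): the one-tailed p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Same observed 30% hit rate, two different sample sizes, claimed rate 50%
p_n20 = binom_tail_leq(6, 20, 0.50)     # 6 of 20
p_n100 = binom_tail_leq(30, 100, 0.50)  # 30 of 100
print(p_n20, p_n100)  # ~0.058 vs ~0.00004
```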

This is exactly what happens in psi experiments!

It all comes down to statistical power, especially the sample size of the study, and the indisputable fact that all human performance, even among pros, varies.

If a psi study has a very small sample size, it has low statistical power, which increases the odds of committing a Type II error (a false negative).

On the other hand, if the study has a large sample size, it has more statistical power, which increases the odds of detecting the effect, if it exists.
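To make the power point concrete, here is a sketch (Python, standard library only; the scenario of a true 30% shooter claiming 50% is my own illustration, not from any study) that computes the probability of detecting the shortfall at two sample sizes:

```python
from math import comb

def binom_tail_leq(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): the one-tailed p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def power_lower_tail(n, p_claimed, p_true, alpha=0.05):
    """Power of a one-tailed binomial test: reject the claimed rate when
    the hit count falls at or below the largest cutoff c satisfying
    P(X <= c | p_claimed) <= alpha, given that the true rate is p_true."""
    c = max(k for k in range(n + 1) if binom_tail_leq(k, n, p_claimed) <= alpha)
    return binom_tail_leq(c, n, p_true)

# A player who truly shoots 30% but claims 50%:
power_20 = power_lower_tail(20, 0.50, 0.30)    # ~0.42: likely a false negative
power_100 = power_lower_tail(100, 0.50, 0.30)  # ~0.99: shortfall almost always caught
print(round(power_20, 2), round(power_100, 2))
```

With only 20 shots, the test misses the real 20-point shortfall more than half the time; with 100 shots, it almost never does.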

For instance, take a look at Autoganzfeld II, Table 1: the replication series produced a total of 40 hits in 151 sessions, for a hit rate of 26.5% (z = 0.34, ns). Now, let's test those 40 hits in 151 sessions against the average ganzfeld hit rate of 32%.

While 40 hits out of 151 sessions is indeed not statistically significant, is this evidence against the 32% hit rate in the ganzfeld meta-analysis?

The p-value of getting 40 hits out of 151 sessions, if the 32% hit rate is true, is P ≈ 0.16 (two-tailed).

So, is this evidence against the 32% hit rate?

Answer: Absolutely not! Even though the study failed to replicate the PRL ganzfeld studies, it is not evidence against the 32% hit rate in the ganzfeld meta-analysis.

Furthermore, it is quite possible that the study simply didn't have enough statistical power to detect the effect.
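As a rough check on that possibility, here is a sketch (Python, standard library only; the 25% chance baseline and 32% meta-analytic hit rate are the figures discussed above, while the one-tailed test setup is my own illustration) estimating the power of a 151-session ganzfeld study to detect a true 32% hit rate against the 25% chance baseline:

```python
from math import comb

def binom_tail_leq(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): cumulative lower tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n, chance, true_rate, alpha = 151, 0.25, 0.32, 0.05

# Smallest hit count that would be significant against the 25% chance baseline
c = min(k for k in range(n + 1) if 1 - binom_tail_leq(k - 1, n, chance) <= alpha)

# Power: probability of reaching that count if the true hit rate is 32%
power = 1 - binom_tail_leq(c - 1, n, true_rate)
print(c, round(power, 2))  # power comes out well below the conventional 0.8 benchmark
```

Under these assumptions, a 151-session study misses a genuine 32% effect a substantial fraction of the time, so a non-significant replication is entirely unsurprising.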

I hope that helped.