Arouet wrote:Can you take us step by step through what you're actually doing? Is it that for each trial you are staring at that line going back and forth on that website (in one of the 5 or so different options) 1000 times?

1) I click on the Bell Curve Experiment

2) Select the "Record" bubble (for a meta-study)

3) Goal and Sound are optional

4) Run Experiment

5) Run the trials until there is a total of 1,024 trials

6) The experiment ends and is published in my Experimental Log.

7) Since the statistical analysis on the website is slightly flawed (it uses a one-tailed test), I conducted my own two-tailed statistical analysis.
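For anyone curious, a two-tailed analysis of this kind of hit/miss data can be done with an exact binomial test. The hit count below (550 out of 1,024) is purely hypothetical, just to show the mechanics; with chance p = 0.5 the distribution is symmetric, so the two-tailed p-value is simply double the one-tailed one:

```python
from math import comb

def binom_sf(k, n):
    """Exact P(X >= k) for X ~ Binomial(n, 0.5)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

n, k = 1024, 550  # hypothetical: 550 hits in 1,024 binary trials

one_tailed = binom_sf(k, n)            # only counts scoring ABOVE chance
two_tailed = min(1.0, 2 * one_tailed)  # deviations in EITHER direction (symmetric since p = 0.5)

print(f"one-tailed p = {one_tailed:.4f}")
print(f"two-tailed p = {two_tailed:.4f}")
```

Notice the two-tailed p-value is twice as large, so a result can pass the website's one-tailed test while failing a two-tailed one.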

And I still don't understand why at 141 trials you would have pretty well exactly expectation, but 30 trials later suddenly have significance? Does that not suggest variance? Your entire significance seems to be based on the last 30 trials! That sounds like variance.

It may be that the effect size is so small that it can only be detected in the long run, which is also true of other conventional experiments dealing with small effect sizes.

Anyway, in order to find out for sure, I would need to conduct further research. By the way, Craig is right that variance doesn't usually persist in the long run, and he is correct that the larger the sample size, the closer the result gets to the expected value. This is why an inference based on a large sample size is much more accurate than one based on a small sample size.
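That point (the law of large numbers) is easy to demonstrate with a quick simulation; the sample sizes below are arbitrary, chosen only to show the proportion of hits tightening around the expected value of 0.5 as the run gets longer:

```python
import random

def hit_rate(n, seed=0):
    """Proportion of hits in n simulated fair-coin trials (deterministic via seed)."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(n)) / n

# Small runs can wander well away from 0.5; long runs hug it
for n in (30, 171, 1024, 100_000):
    print(f"n = {n:>6}: hit rate = {hit_rate(n):.4f}")
```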

Also, I get what you are saying about the meta-study. I'm no expert, but I don't think that's what you're doing: it's just one continuous study. I think calling it a meta-study is confusing, as it implies that you are combining a bunch of different studies from different people. You're just continuously adding to your study.

You just said it! A meta-study, or meta-analysis, is a cross-analysis of the results of several (or all) independent, homogeneous studies, treated as if they were one large study. That is exactly what I'm doing. Richard Wiseman & Milton conducted their own ganzfeld meta-analysis, even though their meta-analysis was horrendously flawed due to heterogeneity and incorrect statistical analysis.

Again: I don't think you should be just doing trials and recalculating. You should be doing a set number of trials, decided in advance, then checking your results.

Again, the number of trials for each study is the same: N = 1,024. A sample size of N = 1,024 is a good size for this kind of experimentation (it is akin to tossing a fair coin 1,024 times).
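To put numbers on that coin-toss analogy: for 1,024 fair-coin trials, the expected hit count and the spread due to pure chance work out as follows (the 1.96 factor is just the usual two-tailed 95% cutoff):

```python
from math import sqrt

n, p = 1024, 0.5
expected = n * p            # 512 hits expected by chance
sd = sqrt(n * p * (1 - p))  # standard deviation of the hit count: 16

# ~95% of pure-chance runs land within 1.96 standard deviations of the mean
low, high = expected - 1.96 * sd, expected + 1.96 * sd
print(f"expected {expected:.0f} hits, sd = {sd:.0f}, 95% chance band = {low:.0f} to {high:.0f}")
```

So a run would need roughly 544 or more hits (or about 480 or fewer) before a two-tailed test flags it at the 5% level.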