In case you haven’t followed the chatter about Daryl Bem’s forthcoming paper on evidence of “precognition” and “premonition” (a.k.a. “psi” effects, or more colloquially, psychic intelligence), you can read a synopsis at the Freakonomics blog (link). The comments on the blog page are quite amusing. More interesting is how Wagenmakers et al. have seized on this as a “teachable moment” for discussing perils and pitfalls in common modes of contemporary data analysis.
Their lengthy critique of Bem’s research (link) is really interesting. They discuss fallacies that arise when confirmatory and exploratory analyses are confused. They also discuss how a low probability of the data given the null hypothesis ($latex p(D|H_0)$) does not necessarily translate into good reason to discontinue belief in the null in favor of some alternative ($latex H_1$). Your willingness to increase your belief in the alternative depends on your priors,

$latex p(H_1|D) = \frac{p(D|H_1)p(H_1)}{p(D|H_0)p(H_0)+p(D|H_1)p(H_1)}$.

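The point can be made concrete with a small sketch of the calculation above. The numbers here are illustrative assumptions, not anything from Bem or Wagenmakers et al.: even data 19 times more likely under $latex H_1$ than under $latex H_0$ barely moves a skeptic who starts with a very low prior on psi.

```python
def posterior_h1(p_d_h0, p_d_h1, prior_h1):
    """Posterior probability of H1 given data D, via Bayes' rule:
    p(H1|D) = p(D|H1)p(H1) / [p(D|H0)p(H0) + p(D|H1)p(H1)]."""
    prior_h0 = 1.0 - prior_h1
    return (p_d_h1 * prior_h1) / (p_d_h0 * prior_h0 + p_d_h1 * prior_h1)

# Suppose the data are 19x more likely under H1 than under H0...
p_d_h0, p_d_h1 = 0.05, 0.95

# ...a skeptic who gives psi a 1-in-1000 prior remains unconvinced:
skeptic = posterior_h1(p_d_h0, p_d_h1, prior_h1=0.001)
print(round(skeptic, 4))   # ~0.0187

# ...while an agnostic with a 50/50 prior is moved substantially:
agnostic = posterior_h1(p_d_h0, p_d_h1, prior_h1=0.5)
print(round(agnostic, 4))  # 0.95
```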
So, if your belief in the null is strong, then even strong evidence against the null can do little to change your beliefs. As Wagenmakers et al. state, “[t]his distinction provides the mathematical basis for Laplace’s Principle that extraordinary claims require extraordinary evidence.” Failure to appreciate this is known as the “fallacy of the transposed conditional,” referring to the fallacious treatment of $latex p(D|H)$ as essentially equivalent to $latex p(H|D)$. They also discuss a Bayesian method of hypothesis testing that helps guard against the problem that, with large data, “everything is significant.” The method has one examine the relative strength of evidence against the null versus evidence against some clearly specified alternative. All of this would make a great introductory case study for methods students.
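A toy version of that contrast can be simulated. This is not the default Bayes factor Wagenmakers et al. actually use (theirs is built on a t-test); it is a simplified binomial sketch with a uniform prior on the alternative, using a hypothetical 50.5% “hit rate” over 100,000 trials. The p-value is comfortably “significant,” yet the Bayes factor mildly favors the null.

```python
import math

def bf01_binomial(k, n):
    """Bayes factor for H0: theta = 0.5 vs H1: theta ~ Uniform(0, 1),
    given k successes in n trials.
    P(k|H0) = C(n, k) * 0.5**n ;  P(k|H1) = 1 / (n + 1)."""
    log_choose = (math.lgamma(n + 1) - math.lgamma(k + 1)
                  - math.lgamma(n - k + 1))
    log_p_h0 = log_choose + n * math.log(0.5)
    log_p_h1 = -math.log(n + 1)
    return math.exp(log_p_h0 - log_p_h1)  # > 1 means data favor H0

def two_sided_p(k, n):
    """Normal-approximation two-sided p-value against H0: theta = 0.5."""
    z = abs(k - n / 2) / math.sqrt(n / 4)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

n, k = 100_000, 50_500            # a 50.5% hit rate, chance = 50%
print(two_sided_p(k, n))          # ~0.0016: "significant"
print(bf01_binomial(k, n))        # ~1.7: evidence mildly favors the null
```

With a large enough sample, a trivially small departure from chance produces a tiny p-value, while a model-comparison approach can still judge the null the better account of the data.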

Update: What Wagenmakers et al. call the “fallacy of the transposed conditional” is also the basis of Kahneman and Tversky’s well-known “representativeness heuristic” (link).