Friday, April 18, 2014

Levitation in Paris

I was in Paris at the beginning of April, giving a talk at the Sorbonne for the launch of the French translation of my book, Supernormal. While walking about and enjoying the city on the way to the Arc de Triomphe, I passed a levitating man. This was a nice synchronicity given the topic of my book, which just won the 2014 Silver Nautilus Book Award. This is a major book award "for exceptional literary contributions to spiritual growth, conscious living, high-level wellness, green values, responsible leadership and positive social change as well as to the worlds of art, creativity and inspirational reading for children, teens and young adults."


How is the man levitating? It's an impressive trick, even when you know how it works.



Monday, April 14, 2014

Feeling the future meta-analysis

Before Cornell University psychologist Daryl Bem published an article on precognition in the prominent Journal of Personality and Social Psychology, it had already (and ironically, given the topic) evoked a response from the status quo. The New York Times was kind enough to prepare us to be outraged. It was called "craziness, pure craziness" by long-time critic Ray Hyman. Within days the news media were announcing that it was all just a big mistake. I wrote about the ensuing brouhaha in this blog.

But the bottom line in science, and the key factor that trumps hysterical criticism, is whether the claimed effect can be repeated by independent investigators. If it can't, then perhaps the original claim was mistaken or idiosyncratic. If it can, then the critics need to rethink their position.

Now we have an answer to the question about replication. An article has been submitted to the Journal of Personality and Social Psychology and is available here.

The key phrase in the abstract reads:
"The paper reports a meta-analysis of 90 experiments from 33 laboratories in 14 different countries which yielded an overall positive effect in excess of 6 sigma with an effect size (Hedges’ g) of 0.09, combined z = 6.33, p = 1.2 × 10e-10. A Bayesian analysis yielded a Bayes Factor of 7.4 × 10e9, greatly exceeding the criterion value of 100 for “decisive evidence” in favor of the experimental hypothesis."
In layman's terms, this means that, according to the same standards used to evaluate evidence throughout the psychological sciences, implicit precognition is a genuine effect. This outcome, combined with a meta-analysis of presentiment effects, provides additional evidence indicating that what bothers critics is their belief about how Nature should behave, rather than how it actually does.
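For those who want to see how the reported combined z score relates to the quoted p-value, here is a minimal sketch in Python. The conversion from z to a one-tailed p is standard; the unweighted Stouffer combination shown below is only an illustration on my part, since the paper's actual meta-analytic weighting is more elaborate, and the per-study z values are made up.

```python
# Minimal sketch: converting a combined z score to a one-tailed p-value,
# plus an illustrative (unweighted) Stouffer combination of per-study z scores.
# Requires NumPy and SciPy; the per-study z values below are hypothetical.
import numpy as np
from scipy.stats import norm

z_combined = 6.33
p_one_tailed = norm.sf(z_combined)      # survival function = 1 - CDF
print(f"p = {p_one_tailed:.1e}")        # ~1.2e-10, matching the abstract

# Stouffer's method (unweighted): z = sum(z_i) / sqrt(k)
z_per_study = np.array([0.8, 1.2, -0.3, 1.5])   # hypothetical values
z_stouffer = z_per_study.sum() / np.sqrt(len(z_per_study))
print(f"illustrative combined z = {z_stouffer:.2f}")
```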

We do not need precognition to predict that the new meta-analysis will not influence the critics' beliefs. Their beliefs, like those of most people, rest upon a naive realist (i.e., common sense) view of nature. 

While common sense is good enough for most basic activities of daily life (not including an understanding of how televisions, smartphones, GPS receivers, and computers work), it is not sufficient to account for the larger reality revealed by science. Nor is it capable of perceiving the far stranger and vaster realities that patiently wait for us far beyond the reach of today's science.


Update April 25, 2014. As I predicted, this meta-analysis shows no signs of influencing critics' beliefs. Instead, new objections are invented. The latest is that we shouldn't believe this analysis because Bem was one of the authors and has a vested interest in the outcome. But by that logic we would also be justified in ignoring any meta-analysis published by avowed skeptics, because they too have a vested interest in their outcomes. Do vested interests, pro or con, influence these analyses? Undoubtedly they do. So is it even possible to craft a truly neutral assessment? Probably, but it would take some effort: the published reports would have to be carefully scrubbed so that the analysts did not know what topic they were analyzing, and other analysts would need to thoroughly search all published and unpublished sources to find every relevant study ever conducted.

I haven't heard of anyone ever getting funding for this type of uber-neutral analysis, but if you know of a funding source that might be interested in supporting such an effort, please let me know.

Update August 14, 2014. And now some critics are claiming that even the most sophisticated use of meta-analysis is flawed, throwing into doubt everything published in psychology, biology, medicine, ecology, and every other discipline that relies on meta-analysis to assess the replication of small effects. This is a "move the goalposts" strategy: when the evidence is not to your liking, change the rules so it is no longer offensive. Now the only acceptable evidence is based on experimental designs that are publicly preregistered. Why any critic thinks that will solve the problem is beyond me.


Thursday, April 10, 2014

No one pays any attention

Do scientists pay attention to psi research? Some skeptics would have you believe that this topic is so far from the mainstream that no one takes it seriously. What do article impact metrics indicate?

For the article Predictive physiological anticipation preceding seemingly unpredictable stimuli: A meta-analysis, which examines experiments studying what I've called "presentiment," Altmetric reports that this is "one of the highest ever scores" in the journal Frontiers in Psychology (ranked #3 of 1,714 articles). The average number of views for a journal article is typically a few hundred, and that's for a very popular paper. This paper has 47,765 views so far.

For the article Predicting the unpredictable: Critical analysis and practical implications of predictive anticipatory activity, Altmetric reports that this article "is amongst the highest ever scored" in Frontiers in Human Neuroscience, with 10,584 views.

For the article A call for an open, informed study of all aspects of consciousness, Altmetric reports that this article is "one of the highest ever scores" in Frontiers in Human Neuroscience, with 19,524 views.

For the article Electrocortical activity associated with subjective communication with the deceased, Altmetric reports that this "is amongst the highest ever scored" in Frontiers in Psychology, with 6,121 views.

In other words, compared to most journal articles on mainstream (meaning, conventional) topics, these articles are reaching into the rarefied domain of extreme scientific impact -- hundreds of times more interest than the typical article.
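As a rough sanity check on that comparison, here is a small back-of-the-envelope illustration that divides the reported view counts by an assumed baseline of a few hundred views per article; the baseline figure is an assumption for illustration only, not a measured value.

```python
# Back-of-the-envelope comparison of reported article views against an
# assumed typical baseline (~300 views per article); the baseline is illustrative.
views = {
    "Predictive physiological anticipation (Frontiers in Psychology)": 47_765,
    "Predicting the unpredictable (Frontiers in Human Neuroscience)": 10_584,
    "A call for an open, informed study (Frontiers in Human Neuroscience)": 19_524,
    "Electrocortical activity ... the deceased (Frontiers in Psychology)": 6_121,
}
TYPICAL_VIEWS = 300  # assumed baseline

for title, count in views.items():
    ratio = count / TYPICAL_VIEWS
    print(f"{title}: {count:,} views (~{ratio:.0f}x the assumed baseline)")
```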

I've found a similar response every time I've given a talk to an academic or technical audience. While opinions differ on how to interpret psi data and vigorous debates are common, there is no question that scientists and scholars are interested. And isn't that what a healthy science is all about -- the excitement of exploring the frontiers of knowledge?

As Gandhi is often quoted as saying, "First they ignore you, then they laugh at you, then they fight you, then you win." Based on interest and impact metrics, if this were a political battle (which it basically is -- the politics of ideas), then as far as the actual mainstream is concerned (mainstream in terms of numbers, not the small minority that desperately holds onto the status quo), I'd estimate that we're somewhere between fighting and winning.

Update: May 7, 2014. Out of curiosity, I wanted to see how my own scientific impact metric would fare against that of the average scientist. According to a study by the London School of Economics and Political Science, average tenured professors in disciplines ranging from law to economics have (Hirsch) h-indexes of 2.83 to 7.60, respectively. The average h-index varies widely by discipline, but Hirsch estimated (based on physicists) that after 20 years a "successful" scientist will have an h-index of 20, where success in this context is equivalent to a full professorship in physics at a major research university. According to Google Scholar, my h-index is 22.
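For readers unfamiliar with the metric, the h-index is the largest number h such that an author has h papers each cited at least h times. A minimal sketch, using made-up citation counts purely for illustration:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for ten papers:
print(h_index([50, 30, 22, 15, 8, 6, 4, 4, 1, 0]))  # -> 6
```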

Friday, April 04, 2014

Now it becomes clear

As I've previously mentioned, Wikipedia has a problem with topics that fall outside a tightly constrained, naive view of reality. That there are different opinions about topics such as homeopathy, parapsychology, or energy medicine is not surprising. But it is disappointing (and on the verge of abetting libel when it comes to biographies of living persons) when an otherwise useful encyclopedia maintains a policy of presenting such topics with a systematic negative bias.

Attempts to edit these articles to provide more balance are summarily ignored, and even neutral, well-intentioned editors have been banned. Articles with citations only from unreliable, uninformed, or cynical sources might be useful for promoting favored ideologies, but only in an Orwellian world could such an encyclopedia be considered anything but a work of fiction. Indeed, this very blog was labeled an "unreliable source" when I simply pointed out an easily demonstrable mathematical fact.

I used to wonder why those in charge of Wikipedia would allow such biases to persist. I imagined that they were simply unaware of how a small group of enthusiastic fact-deniers had hijacked the system. But now something has happened that illuminates the problem.

On Change.org, the Association for Comprehensive Energy Psychology posted a petition asking Jimmy Wales, one of the founders of Wikipedia, to "create and enforce new policies that allow for true scientific discourse about holistic approaches to healing." ACEP posted this petition because publications relevant to their interests have faced the same sort of systematic negative bias as articles on psi research. The response by Wales was as follows:
No, you have to be kidding me. Every single person who signed this petition needs to go back to check their premises and think harder about what it means to be honest, factual, truthful. Wikipedia's policies around this kind of thing are exactly spot-on and correct. If you can get your work published in respectable scientific journals - that is to say, if you can produce evidence through replicable scientific experiments, then Wikipedia will cover it appropriately. What we won't do is pretend that the work of lunatic charlatans is the equivalent of "true scientific discourse". It isn't.
Besides the snarky insult, this response reveals more than ignorance. It indicates that Wales has allowed his amygdala to trump his frontal lobes. He might benefit from re-reading his own guidelines on the "Five Pillars" of Wikipedia, especially the pillar recommending that articles be written from a neutral point of view.

ACEP provides an evidence page that shows there already is "work published in respectable scientific journals." Yes, energy psychology techniques seem strange, but so what? There are all sorts of things that are not well understood yet, but are nevertheless backed by solid empirical evidence (like psi). And in this particular case, the methods are not merely empirically intriguing, they're also clinically useful.

And so now it becomes clear why Wikipedia has become a bastion of reactionary lore. It assumes a quaint form of reality that would have been appropriate to promote in the 17th century, but that view is neither appropriate nor useful nor correct in the 21st century. As Tolstoy once said:
I know that most men, including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have delighted in explaining to colleagues, which they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives.