A little while ago I wrote a post titled Skeptics Gone Wild, in which I criticized an argument against Jenny McCarthy, one I classified as an ad hominem, that goes like this:
Jenny McCarthy speaks of dangerous “toxins” in vaccines, yet she gets Botox shots, which include botulinum, one of the most toxic substances around, right on her face.
That post sparked a mini-war in the comments with Tom, of Dubito Ergo Sum, who disagreed with me (see the comments on my Skeptics Gone Wild post). That mini-war then spilled over onto Twitter, where we had a brief, so to speak, exchange of messages. I could post screenshots of the exchange, but I'm not going to waste time on that, as Tom has written quite an extensive entry on his blog about the whole thing, titled In which I piss on the 'Dude's rug.
Now, these blogging "wars" tend to get longer with each reply, so I will not go over Tom's entry point by point. Instead, in an effort to keep these entries as short as possible, and since I am interested in dialogue rather than conflict, I will add some clarifications about the main points that he makes.
It appears to me that both Tom and I have been affected by Phil Plait's "Don't Be a Dick" talk, in different ways. I took Phil's talk, turned it around on myself, and understood, and agreed with, what he was saying. Tom appears to have taken the opposite stance, the "hell no" stance that people like PZ Myers and Richard Dawkins seem to favor. Which is fine, I guess; what most of us are doing is highly personal, and each of us will make decisions about how to go about it.
However, the more I read Tom's replies, tweets and blog post, the more I agree with Phil's talk, because I can see firsthand how some of the points that he makes, and he does make some good points, are undermined, at least in my eyes, by the, dare I say, arrogant way in which he at times communicates them.
During our exchange, I have been referred to as a self-proclaimed skeptic, poor Skepdude, and a springboard, and apparently I somehow led Tom to the pessimistic expectation that I would not approve his last comment on my blog, the only one of his comments that went to moderation, for some weird reason (thank you, WordPress), and wasn't approved until later that day. Sarcasm is acceptable in debating, I grant, but I must ask: how necessary is it when you have a good argument to make AND an audience that, presumably, understands logic? To me, Tom's reliance on sarcasm means that he's either getting personal satisfaction out of its use, or he thinks the audience witnessing our discussion will be more easily persuaded by cheap shots than by a good argument, or he believes the sarcasm will make his real arguments more persuasive, or some combination of these, or some other reason I cannot think of.
So first, let us go over what sparked this whole thing: specifically, the personal attack on Jenny McCarthy. Is it or isn't it an ad hominem? To which my response is: does that really matter much in relation to the overall message I was trying to convey? Even if I turn out to be wrong in my classification of it as an ad hominem, does the personal attack on Jenny McCarthy have any bearing on the arguments that she makes? I will refrain from repeating myself at this point. I will only direct readers to my original entry and ask them to look at my tobacco example, then make up their own minds whether adding the "Jenny shoots Botox on her face" personal attack is warranted or not, ad hominem or not!
Secondly, my position has been straw-manned a little bit, unintentionally I'd like to believe. Never in our exchange did I say, or imply I believe, that in communicating to or with the public "we can ignore ethos and pathos, and argue on logos alone," and if something I said may have come across that way, then I take this chance to publicly clarify that this is not what I stand for.
I may have been wrong in classifying the Jenny-Botox attack as an ad hominem (which, for the record, I am not yet convinced of), but my main point was that we should not allow ourselves to be sloppy thinkers, that we should not fail to cast a critical eye on our own arguments to ensure that we are not committing the same mistakes we accuse the "other side" of committing, that the end does not justify the means, so to speak. How one jumps from that to arguing on logos only, I do not comprehend.
Of course, facts and statistics are dry and fail, on their own, to be very convincing to the general public; of course we need passion, rhetoric, and emotion when discussing or debating these issues in public. I think those are absolutely necessary to win in the court of public opinion, but it does not follow that every rhetorical tactic is fair game, that every emotion is fair game, because I happen to believe, or to have come to the conclusion (whichever way you like to phrase it), that some don't work as well as we think they do, at least from what I have heard psychologists say about human communication. But I do not intend to turn this into a "what's the most effective way of communicating" debate, because I don't think I can add anything beyond personal experience, and we skeptics know how personal anecdotes can lead us astray.
The take home point here is that just because I am advocating against the use of ad hominems/personal attacks does not logically lead to the conclusion that therefore I am advocating for “arguing on logos” only. Are ridicule & sarcasm all there is for us to draw on? What about empathy? I don’t hear Tom making the case for expressing empathy anywhere in his defense of arguing with pathos.
The last point I want to make revolves around something Tom said in regard to the ad hominem. He maintains that if all you said was that Jenny is against toxins but she uses Botox, that would be an ad hominem, and we both seem to agree there. Then he also says that because we have other valid arguments to counter her toxins nonsense, the Botox thing is no longer an ad hominem but is demoted, so to speak, to the status of a simple personal attack. At least that's how I understand his argument; I hope I'm not setting up a straw man here, I'd hate to do that. But my only comment is this: a logical fallacy is a logical fallacy, regardless of whether it is preceded, or immediately followed, by any number of valid arguments. In other words, a rotten apple in a basket full of good apples is still a rotten apple. Now, I am not a philosopher, and that may be a naive view, and I am willing to defer to the expertise of professional philosophers on this issue, but until then, this is what I think.
So, not being a philosopher by training, I have to say that I have but a layman’s understanding of the Ad Hominem. To my understanding it goes like this:
1. Person makes claim X
2. We point out something about the person (unrelated to X)
3. We reject X based on 2
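Purely as an illustration of the three-step schema above, here is a minimal sketch in code. Everything in it, the function, field names, and example data, is hypothetical and exists only to make the pattern explicit; it is not a real argument-analysis tool:

```python
# Hypothetical sketch of the ad hominem schema:
#   1. a person makes claim X
#   2. we point out something about the person, unrelated to X
#   3. we reject X based on step 2
def is_ad_hominem(claim, attack, rejection_basis):
    """Flag the fallacy: the claim is rejected because of a personal
    attribute that is irrelevant to the claim's topic, rather than
    being rejected on the merits of the evidence."""
    attack_is_relevant = attack["relevant_to"] == claim["topic"]
    rejects_via_attack = rejection_basis == "attack"
    return rejects_via_attack and not attack_is_relevant

claim = {
    "speaker": "Jenny",
    "topic": "vaccine toxins",
    "text": "Vaccines contain dangerous toxins.",
}
attack = {
    "target": "Jenny",
    "relevant_to": "personal cosmetic choices",
    "text": "She injects Botox into her face.",
}

# Rejecting the claim *because of* the unrelated attack -> fallacy
print(is_ad_hominem(claim, attack, rejection_basis="attack"))    # True
# Rejecting the claim on the evidence instead -> no fallacy
print(is_ad_hominem(claim, attack, rejection_basis="evidence"))  # False
```

The point the sketch makes is the same one made in prose: the fallacy lives entirely in step 3, in *why* the claim is rejected, not in whether the personal observation happens to be true.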
Now I am sure that there must be subtle variations and such, but the bottom line is we reject an argument someone makes based on some quality of the person, without really addressing the argument. So in Jenny’s case we have the following:
1. Jenny argues that vaccines have toxins that are dangerous to children.
2. We point out that Jenny uses Botox on herself.
So where does that leave us? Well, as both Tom and I have said, it depends on the context. The third requirement for the ad hominem (therefore Jenny is wrong about toxins in vaccines) hasn't been stated in actual words. I maintain that when 1 & 2 are used together, they imply, in certain contexts and regardless of what the author may or may not desire to imply, that Jenny is wrong about 1. I specifically linked to a blog entry by Phil Plait that used the whole Jenny-Botox thing, a case where I thought the implication was hovering in the air, even if Phil may not have meant it that way. Go read that entry yourself and decide whether I am right or wrong.
Now, can the fact that she uses Botox be used in an argument in ways which would make it not an ad hominem? Sure it can, and Tom formulates examples himself, which are not the formulations I'm having an issue with anyway, but as he himself says, it is still a personal attack, which adds nothing to the conversation. It is a poor tool to use in a public debate anyway; she can easily counter with "what does that have to do with vaccines? So I am misinformed about Botox, but I'm not here to discuss Botox, which is used by adults; I'm here to discuss the health and safety of our children, so stop attacking my personal lifestyle choices." In which case you've already lost the public opinion war, and you will be perceived as an arrogant person trying to belittle a mom who's fighting for her son's, and other children's, wellbeing. Try explaining then that what you did was not an ad hominem attack.
If you think that facts and statistics are too dry, do you think that discussing in detail what is and what isn't, philosophically speaking, a proper ad hominem would be any more…wet, for lack of a better term, if what you're trying to accomplish is to win the public's hearts and minds?
So the CFI has joined the discussion about the non-mosque not on Ground Zero with a press release; a misguided press release that is very worrying to me as a secularist, humanist and skeptic.
The Center for Inquiry is troubled by the rhetoric of some of those protesting the proposed Islamic religious center and mosque near Ground Zero, and it especially deplores the growing politicization of the dispute.
That’s good actually; I am worried myself about the tone and the nonsense rhetoric being thrown around by those opposed to the non-mosque.
CFI also holds that the focus of the protests is too narrow; it would be inappropriate to build any new house of worship in the area immediately around Ground Zero, not just mosques.
What? CFI is worried by the rhetoric, because it is too narrow and it is only focused on Islam?
“The 9/11 attacks were an example of faith-based terrorism, and any institution that privileges faith above reason is an affront to those who were killed and injured in those attacks,” observes Ronald A. Lindsay, president and CEO of CFI.
Oh Ron, Ron Ron Ron Ron Ron! Fox News is appalled because “The Muslims” want to have their own center near ground zero and you’re appalled because “The Religious” in general want to do that? And can you please tell us what is the appropriate radius around Ground Zero where religious expression of any sort shouldn’t take place because it would affront the victims and families of 9/11? Yes, Ron please specify the radius around Ground Zero where you think we should ignore the Constitution of the United States of America.
CFI fully supports the free exercise of religion; protecting the rights of believers and nonbelievers is central to CFI’s mission. Accordingly, CFI endorses President Obama’s recent statement reminding the country that Muslim Americans enjoy the same rights as other Americans and should not be treated as second-class citizens.
Except within a radius, to be specified by CFI, around Ground Zero, that is. Way to support the guilt-by-association fallacy, Ron. See, CFI cannot have its cake and eat it too; you cannot rely on the Constitution to keep creationism from creeping into our schools without accepting that the same amendment of the Constitution demands that the right of free exercise of religion be granted to people of faith. Doing otherwise would be hypocritical, and we all know, or should know, that hypocrisy has no room in rational inquiry.
UPDATE 08/29/10 – The CFI has issued an updated statement which supersedes the previous one. Here is the full text of the new, improved, statement.
The Center for Inquiry’s Statement on the Ground Zero Controversy
CFI fully supports the free exercise of religion; protecting the rights of believers and nonbelievers is central to CFI’s mission. Accordingly, CFI endorses President Obama’s recent statement reminding the country that Muslim Americans enjoy the same rights as other Americans and should not be treated as second-class citizens. There should be no legal impediment to the placement of an Islamic community center near Ground Zero, just as there should be no legal impediment to the placement of a church, temple, or synagogue near Ground Zero.
Further, CFI laments the effort by some to turn the proposed Islamic center into a political issue. Government officials and candidates for office should not intervene in disputes over the alleged offensiveness of a place of worship. Such conduct violates the spirit, if not the letter, of the Establishment Clause. Government officials should not be deciding who is a “moderate” Muslim any more than they should be deciding who is a “moderate” Christian or Jew.
A number of private individuals have protested the proposed Islamic center. The tone and substance of these protests covers a wide range. Some protesting the Islamic center have raised legitimate questions, but to the extent the objections to the Islamic center mistakenly equate all Muslims with Muslim extremists, CFI condemns them.
CFI maintains that an Islamic center, including a mosque, near Ground Zero, in and of itself, is no different than a church, temple, or synagogue. It is undeniable that the 9/11 terrorists were inspired by their understanding of Islam, and that currently there are far more Islamic terrorists in the world than terrorists of other faiths, but those facts are not relevant to the location of the Islamic center, absent evidence that terrorists are involved in this endeavor, and there is no such evidence.
CFI’s unequivocal support for the legal right of Muslims to place a community center near Ground Zero does not imply that CFI views the new center as an event to be celebrated. To the contrary, CFI is committed to the position that reason and science, not faith, are needed to address and resolve humanity’s problems. All religions share a fundamental flaw: they reflect a mistaken understanding of reality. On balance, CFI does not consider houses of worship to be beneficial to humanity, whether they are built at Ground Zero or elsewhere.
This statement supersedes any prior statement issued by CFI regarding the Ground Zero controversy.
Much better; that's what the first statement should have read like in the first place.
Guest post by Kyle Tuttle
Fad diets are nothing new; they’ve been around for ages. And the reason they’re fads is that most people soon realize they don’t work and stop using them just as quickly as they started. Unfortunately, there’s always another fad diet waiting in the wings.
The typical fad diet falls into one of (or a combination of) the following three categories:
- The virtues of a particular food or food group are exaggerated, the food is purported to cure specific diseases, and it is therefore incorporated as a primary constituent of an individual's diet.
- Foods are eliminated from an individual’s diet because they are viewed as harmful.
- An emphasis is placed on eating certain foods to express a particular lifestyle.
The human body requires a base level of vitamins, minerals, and nutrients to grow and function properly, and fad diets often disrupt this nutritional balance. The impact of this disruption can range from mild to devastating. In the case of developing children, the effects of malnutrition can be especially severe.
Two popular fad diets have been shown to be particularly harmful to young children:
- Vegan diets. Vegans avoid foods made from animal products, including meat, eggs and dairy — each a natural source of the proteins, fats and vitamins (particularly B12) crucial to infant development. While advocates of vegan diets do often recommend mother’s breast milk as the optimal diet for children under the age of one, it’s rare to hear them acknowledge that infants fed only breast milk can still be malnourished if the mother follows a strict vegan diet.
- Macrobiotic diets. These restrictive diets get progressively more limited as one gets older. Grain is the staple of a macrobiotic diet, present in disproportionately high levels, and at the expense of meat and dairy — the latter of which (as mentioned) is especially important in infants. In fact, scientific studies have shown a high prevalence of rickets and an increased risk of vitamin B-12 and iron deficiency in infants on macrobiotic diets.
While malnutrition is harmful at any age, it is particularly catastrophic for young children in their formative stages. An infant’s nutritional needs are distinctly different from an adult’s:
- A deficiency of vitamin D and calcium can lead to rickets – characterized by dental deformities, decreased muscle tone, and softening of the bones, which can lead to skeletal deformities, including a misshapen head and bowlegs, among others.
- A deficiency of B vitamins carries a whole host of malnutrition nightmares. For example, Vitamin B12 deficiency can cause permanent damage to the brain and nervous system.
- Due to the extensive growth and myelination of their nerve cells, children under the age of two require very high levels of dietary fat. About 50% of their overall calories should come from high-fat sources.
Increasingly, it’s becoming clear that subjecting an infant or young child to fad diets or cult diets that disregard established nutritional guidelines isn’t just irresponsible, but is in fact a form of child abuse. Consider the following cases, where parents were charged with intentionally harming their children through the use of overly-restrictive fad diets:
- Jade Sanders and Lamont Thomas were each sentenced to life in prison for the death of their 6-week-old son, Crown Shakur. The infant was fed a diet consisting almost entirely of soy milk and organic apple juice, and weighed just 3 1/2 pounds when he died.
- Joseph and Silva Swinton were convicted of first degree assault after nearly starving their infant daughter, Ilce, to death on a strict vegan diet. At 15 months old, Ilce suffered from rickets, broken bones, internal injuries and suspected neurological damage.
- Joseph and Lamoy Andressohn were acquitted of aggravated manslaughter but convicted of four counts of child neglect when their 6-month-old daughter, Woyah, died after being fed a diet of raw fruits and vegetables. The child neglect charges stemmed from the condition of their surviving children, each of whom was severely malnourished.
Clearly, these are extreme cases, but they illustrate how dangerous fad diets can be when imposed on young children, who have very different nutritional requirements from adults. Without intervention, a child can suffer permanent physical or mental damage, or even death. If an adult prefers to eat a vegan diet to protect animals, that's their choice and their right. But when they have a child, perhaps that's the animal they should be saving first.
This guest post was contributed by Kyle Tuttle, whose writing focuses on helping students find the right psychology degree. He can be reached at tuttletr33 at gmail dot com.
Skeptics and parallel rationalist communities spend a lot of time on “inside baseball” — jargon-filled debates about technical matters that seem incomprehensible, dull, or ridiculous to outsiders. These shouldn’t be the main skeptical topics (shouldn’t we be busy solving mysteries and educating the public?) but some discussion on these matters is unavoidable and worthwhile. Many movement-oriented skeptics and organizations have things they hope to accomplish; with goals, there comes discussion of best practices.
Among these insider debates, none is more persistent than that of “tone.” Hardly a week goes by that some tone-related tempest doesn’t spill out of its teacup and across the blogosphere. And yet, these issues matter to many (including me). When people devote enormous energy to skepticism, dedicate careers to skeptical outreach, or generously commit volunteer hours or donations to skeptical projects and organizations, it’s natural that abstract internal debates about the soul of skepticism are perceived to have powerful importance.
The passions of many have been swept up in the ongoing scrap about Phil Plait's "Don't Be a Dick" speech at the James Randi Educational Foundation's "Amazing Meeting 8" conference in Las Vegas. The skeptical blogosphere began buzzing even as Plait delivered the speech, and hasn't yet stopped. The debate has reached a new level of feverishness in recent days, after Plait posted the entire video of the speech online. (If you haven't seen it, it's a powerful speech which is well worth your time.)
The flood of reactions — many hundreds of lengthy comments, dozens of blog posts and a teeming ecosystem of competing tweets — seems to have broken down along two main axes of debate. One axis defends (or challenges) Plait's factual assertion that civility tends to help skeptical communication, while incivility tends to hinder it. The other axis concerns moral values.
Talking Past Each Other
The empirical dispute about the effectiveness of civility has sometimes devolved to a clash of straw men. As PZ Myers responded,
It’s a little annoying. Everybody seems to imagine that if Granny says “Bless you!” after I sneeze, I punch her in the nose, and they’re all busy dichotomizing the skeptical community into the nice, helpful, sweet people who don’t rock the boat and the awful, horrible, bastards in hobnailed boots who stomp on small children in Sunday school.
I can relate. I'm similarly exasperated when it is suggested that "nice" skeptics are trying to enforce uniformity; or it is imagined that Phil's speech was secretly "yet another attempt to erect a skepticism-free barrier around theistic beliefs"; or it is supposed that anyone wants to take anger and passion out of the skeptics' toolbox; or, even, argued that "nice" skeptics want to "go with the flow, to pretend that a thousand issues, whether it's homeopathy or religion or transcendental meditation or an absence of critical thinking or a lack of concern about our health, are OK because they make people happy." Where does this stuff even come from?
All this noise conceals a non-trivial amount of consensus. In general, everyone actually agrees that passion, anger, comedy, and ridicule can be useful in the right context, when used carefully and well. Everyone agrees that face to face conversations are best conducted with kindness and respect. Everyone (PZ included) agrees that fact-based, collegial discourse is often-but-not-always the best outreach strategy. (Consider PZ’s stated position: “I think the best ideas involve a combination of willingness to listen and politely engage, and a forthright core of assertiveness and confrontation — tactical dickishness, if you want to call it that.” To me, this sounds surprisingly similar to Plait’s “Don’t Be a Dick” argument: “Anger is a very potent weapon, and we need that weapon, but we need to be excruciatingly careful how we use it.”)
In other places, the effectiveness debate has bogged down in red herrings. For example, Richard Dawkins complained that
Plait naively presumed, throughout his lecture, that the person we are ridiculing is the one we are trying to convert. …when I employ ridicule against the arguments of a young earth creationist, I am almost never trying to convert the YEC himself. … I am trying to influence all the third parties listening in, or reading my books. I am amazed at Plait’s naivety in overlooking that and treating it as obvious that our goal is to convert the target of our ridicule.
This is a serious misreading of Plait's intent, and, I think, a rather baffling one. Phil Plait is an experienced public figure, a career science communicator. Of course he knows (as I know, and as Dawkins knows) that our largest and best opportunity for outreach is often the wider audience of third-party onlookers.
Indeed, the audience of onlookers is exactly where the empirical question matters most.
(ANSA) – Rome, August 3 – Italy's antitrust watchdog said on Tuesday that it had opened a probe into the companies which distribute the Power Balance wristband, which has become this summer's fad.
The probe by the authority will focus on “possible inappropriate commercial practices” by the companies which sell the wrist band that promises to give “balance, strength and flexibility”.
The focus of the probe will be whether the consumer risks being misled by properties attributed to the product.
The probe will center on two companies: Power Balance Italy, which distributes the Power Balance brand in Italy; and Sport Town, a company which retails the product.
The two companies will have 15 days to submit medical/scientific evidence dealing with the composition of the product and its effects on the human body, as well as information dealing with any possible side effects in regard to the consumer's health and safety.
Neither company has at present issued any statement about the probe.
According to the producer’s website, Power Balance is “performance technology designed to work with your body’s natural energy field”.
SKEPDUDE SAYS – I'm certain that they could be given 15 weeks and would still fail to come up with any scientific evidence. I suspect they'll offer a lot of celebrity testimonials, and some Applied Kinesiology videos, as evidence.
I’m no fan of Jenny McCarthy, especially given her anti-vaccination views. I think that most of her arguments are invalid; she insists on perpetuating long debunked myths about vaccines, and seems to refuse to look at the actual evidence regarding vaccines. For that she needs to be criticized as much as we, politely but strongly, can. Nevertheless, it troubles me to witness ad hominem attacks, and the use of logical fallacies against McCarthy. One such argument that seems to have gained a bit of popularity these days goes along these lines:
Jenny McCarthy speaks of dangerous “toxins” in vaccines, yet she gets Botox shots, which include botulinum, one of the most toxic substances around, right on her face.
Unfortunately, even the one who has recently threatened to become my favorite active skeptic around (James Randi, of course, is in a category of his own; I'm talking mere mortals here), the Bad Astronomer himself, made a similar comment at his Bad Astronomy blog.
I see. So injecting kids with scientifically-proven medicine that can save their lives and the lives of countless others is bad because of a fantasy-driven belief that it causes autism, but injecting a lethal pathogen — in fact, the most lethal protein known — into your face to help ease the globally threatening scourge of crow’s feet is just fine and dandy.
I’ve also heard a similar comment being made in an episode of The Skeptics Guide to the Universe podcast, fairly recently.
Now, as satisfying as taking shots at people we whole-heartedly disagree with may be, I fail to see what the above comment adds to the vaccine discourse. Jenny McCarthy is wrong because of what she's choosing to consider evidence, and due to poor critical thinking about the issue at hand, not because of her personal, adult lifestyle choices. Think about it; it is a non sequitur, it has nothing to do with the discussion at hand, and I'm not even sure what it is supposed to highlight about Jenny McCarthy herself.
If you are not convinced, let us do the usual experiment and replace the word “Botox/Toxin” with something else, smoking for example. Now let us assume for a second that teachers can smoke in the classrooms and McCarthy was advocating against smoke in the schools. Also assume she was a smoker herself and had said the following about cigarettes:
“I love smoking, I absolutely love it,” she said. “I get it minimally, so I’m not a chain smoker. But I really do think it’s a savior, when I’m stressed and tired.”
Now ask yourself: would her own personal love and consumption of tobacco invalidate her arguments against smoking in schools? Of course not, and for the same reason her own personal use of Botox is not an argument against her anti-vaccine views. It is not related in any way; it is a non sequitur, and using it amounts to nothing more than an ad hominem, or poisoning-the-well, logical fallacy.
We skeptics take pride in our allegiance to logic and evidence; we are aware of our own shortcomings; we are aware that we are fallible and that we make mistakes. In my opinion, the above comments about Jenny McCarthy are a mistake that we should own up to, make amends for, and stop using. If you really want to counter Jenny's anti-vaccine views, choose one of the claims she makes, do some research, and write a nice blog entry showing where she goes wrong and what the evidence says, but do not resort to ad hominem attacks. We are skeptics, and we ought to be better than that.
This is insane, immoral, inhuman and I sure hope to goodness, illegal!
Doctors who face a shortage of anaesthetic drugs and expertise in war-torn Iraq have successfully used acupuncture techniques for Caesarean section deliveries, according to a new small study.
How the hell do you measure success when you're cutting a woman's belly open without anesthesia? What is the objective way of measuring the pain here? You know, there used to be another word for this procedure; it's called torture! And what the hell does it mean to be a doctor who faces a shortage of expertise? Can someone explain that to me?
The researchers said that if their results were replicated in a larger study, such practices could be a useful addition to standard medical practice in fully equipped hospitals.
Oh, let me get this straight: you want more pregnant women to have their bellies sliced open without anesthesia? Fuck no, asshole. Come here, let me introduce you to my little friend: ETHICS!
The technique was used to counter the effects of halothane, which relaxes the womb, but carries an increased risk of bleeding as a result. Oxytocin is normally used to counteract these effects, but was in short supply at the time.
As soon as possible after delivery, six acupuncture needles were inserted into the mother’s toes and ankles and manually stimulated for five to ten minutes. The acupuncture points relate to bleeding from the womb, prolapse of the womb, difficult labour, uterine contractions and retention of the placenta.
What? You just said it was used by doctors facing "a shortage of anesthetic drugs," and now you're saying it was used after delivery to control bleeding? Would you make up your fucking mind already? Actually, this is good news; it means whoever wrote this piece of garbage is either trying to be funny or a complete imbecile, which carries with it a ray of hope that this whole thing just may not be true, that doctors are not using acupuncture instead of anesthesia anywhere in the world.
As skeptics, we take pride in our allegiance to evidence; we take pride in applying the skeptical method to various claims in order to figure out if there is any truth behind the claim or not. “Be skeptical; look at the evidence, defer to scientific consensus; look it up for yourself” are usual phrases that we throw around. Yet, the question must be asked: how realistic are those tenets? How honest is it to claim that, for every position we take in our skeptical activities, we’ve done the research? That we’ve found out what the scientific consensus is? That we’ve looked it up, ourselves?
This latest rambling was inspired by a tweet from Daniel Loxton, who pointed to a comment on, what else, a commentary on Phil Plait's now-famous DBaD (a.k.a. Don't Be a Dick) TAM8 speech. Here is the comment by Red Pill Junkie, in its entirety:
Another thing I liked about Phil’s speech was in his telling the anecdote of how he chose to argue with a young Creationist; when she wanted to discuss things about dinosaurs and evolution, he quickly admitted he is not a Biologist, and hence wasn’t qualified enough to give her the answers to such questions.
That is an important message. Obviously a person with such a passion for science like Phil is perfectly entitled to have a layman’s opinion on fields that stand aside of his particular expertise; people should have many fields of interest, not just the stuff you studied as an undergraduate —Hell, that’s why you’re here reading this, ain’t it? :)
But one of the main problems with skeptics as a “movement”, is that the moment they acquire the term —and the methods of acquiring vary greatly from person to person, although more fall into simply “not believing in God, aliens and fairies” and be (very) vocal about it— they tend to erect themselves as experts in *EVERYTHING*; they feel entitled to give an “expert” skeptic opinion about everything they come across —UFOs, ghosts, Atlantis, reincarnation, 9/11, etc etc.
But this is not just their fault, since the Larry Kings of the media world always love to use the age-old formula of inviting an expert in some paranormal field —someone like Stan Friedman, who has spent decades researching the UFO phenomenon— and then inviting another “expert”: an official skeptic. The results are often …disastrous.
So yeah: part of not-being a dick is admitting you don’t have a diploma in Everything-ology ;)
It is important to pay special attention to that last sentence. No one is an expert in Everything-ology. It is simply impossible for any skeptic to have the time, or resources, to do an exhaustive search into every claim that we as skeptics express opinions on. Think about this for a moment: how readily do skeptical activists jump on any claim involving ghost hauntings? How quickly do we pull out the staple explanations to explain away that haunted house? Yet how many of us have gone on even one haunted-house investigation? The answer, I suspect, is that not many of us, myself included, have taken part in such an activity.
Let’s look at something like global warming. How many of us have read at least a substantial portion of the science about global warming? Again, I suspect the answer will be that only a small fraction of us have. When we go around proclaiming that the scientific consensus supports the view, are we really basing that on our survey of the science, or are we basing that on what we heard some acclaimed skeptic say in her podcast, or write in his blog? Honestly, how many of us have read the IPCC synthesis report, all 52 pages of it?
Now, I’m not writing all this to belittle grassroots skeptics; I am one myself. The point is that, as the comment above says, we have to be very careful to first have it clear in our own heads, and then to make it clear to whoever we’re talking to, that in most cases what we’re expressing is an opinion, and that most of us are not authorities, in any sense of the word, on most of the things we’re opining about. We have to know our limitations. Knowing our limitations doesn’t necessarily mean we ought not to form or express opinions, but it does mean we have to be more flexible than the believer in the opinions we hold. We have to know that we are fallible, that we are most likely forming opinions based on incomplete information, and that we are relying on an argument from authority when we repeat arguments heard on a podcast or read on a blog without taking the time to “check it out for ourselves”.
Checking it out for oneself is impossible to apply to everything, so we have to rely on others; we have to rely on Joe Nickell’s expertise when it comes to investigating haunted houses; we have to rely on the IPCC’s expertise when it comes to summarizing climate science. But we do so with a grain of salt, because we did not do the skeptical thing and check these things out for ourselves. And that grain of salt must grow the further the commenter, on whose words we’re basing our opinion, moves from his or her area of expertise. That is why the grain of salt would be small when relying on the IPCC report, bigger when relying on Phil Plait’s comments on evolution, and bigger still if you’re relying on my comments about vaccines at my spanking new, and wonderfully informative, vaccination blog, Vaccine Central.
A little while ago, I was listening to The Skeptics Guide to the Universe podcast, when Steven Novella mentioned that he’d been on the Skeptiko podcast debating Near Death Experience research with the host, Alex Tsakiris. I subscribed to Skeptiko to hear the debate. My initial reaction was that Alex was trying to honestly evaluate the evidence; however, the way he was interpreting it was unsatisfactory to my skeptical mind. Thus, I decided to listen to a couple of other episodes to see if my initial interpretation was correct.
In the next episode, Alex had as guests the hosts of a skeptical podcast I wasn’t aware of, called Righteous Indignation, and the main thing the four of them spent a lot of time discussing was a study about mediums and communication with the dead. The study is titled “Anomalous Information Reception by Research Mediums Demonstrated Using a Novel Triple-Blind Protocol,” by Julie Beischel and Gary E. Schwartz. I sent Alex an e-mail to make sure that this is in fact the study in question, and he replied confirming that it is indeed the one they were discussing on that show.
Alex took exception to the skeptics’ comment that the study’s methodology was questionable. After reading the study myself, I find myself agreeing, not surprisingly, with the skeptics. This study has glaring issues and leaves too many important pieces of information out. I tried to reach the study’s lead author, Julie Beischel, via e-mail to ask her a few questions, but the address listed on the study came back with an error message. Unfortunately, those questions will remain unanswered.
So without further ado let me get into the meat of things.
The study’s purpose was to investigate the “anomalous reception of information about deceased individuals by research mediums under experimental conditions that eliminate conventional explanations.” In other words, the authors wanted to set up conditions which made it impossible for the mediums to get information in any way besides “anomalous reception”, a.k.a. psychically, and then figure out the success rate.
8 students were selected: 4 with a deceased parent, 4 with a deceased peer. Each student was paired with a student from the other group, so that each pair of 2 students had one deceased parent and one deceased peer, both deceased individuals of the same gender, resulting in 4 pairs of “sitters”. An unrelated third person, who had no knowledge of the sitters or the dead people, served as a “proxy sitter”. In other words, the proxy sitter was given the names of the 2 dead people, which he or she then relayed to the medium over the phone. The medium, working solely with the first name of the dead person, would then go on to produce a reading for the pair (2 readings per medium: one for the dead parent, one for the dead peer). Each pair of sitters received readings from 2 separate mediums.
So, to summarize: 8 students organized in 4 pairs, 8 mediums, each pair receiving readings from 2 mediums, for 16 readings altogether. Next comes the scoring.
I’m not going to spend much time on the technicalities of the scoring process. For purposes of this summary, it suffices to say that each student was presented with the readings for the pair and asked to choose the one that better fit their deceased person. So if you were the student with the dead parent, you’d get two readings: the one meant for your dead parent and the one meant for the dead peer of the other student in your pair. You would not know which was which and had to pick the one that best fit your dead parent. After this process, 13 out of the 16 readings were correctly identified.
The authors concluded with strong words:
The present findings provide evidence for anomalous information reception but do not directly address what parapsychological mechanisms are involved in that reception. In and of themselves, the data cannot distinguish among hypotheses such as (a) survival of consciousness (the continued existence, separate from the body, of an individual’s consciousness or personality after physical death) and (b) mind reading (ESP or telepathy) or super-psi (retrieval of information via a generalized psychic information channel or physical quantum field, also called super-ESP).
So what is the verdict here? Does this study really provide convincing evidence for anomalous reception?
Basic Criteria for evaluating a scientific paper
Before we start analyzing how well, or not, this study followed basic methodological principles, it is important, I think, to review the basic characteristics that we expect to see in a well designed and run scientific study, and they are:
- No fraud – This one is pretty obvious; the very first requirement is that there was no fraud perpetrated by the authors, no hiding of data, no making up data and that sort of stuff.
- Statistical competency – We would also expect the authors to have done their statistics properly, that the correct analytical techniques were used and such.
- Sample Size – This refers to the number of subjects, or data points, in a study. For any given confidence level and confidence interval, a minimum sample size (referred to as n in statistics) is necessary. The smaller the sample, the less reliable the results of the study. The required sample size depends on the total population about which we’re trying to draw a conclusion, the confidence level, and the confidence interval. For a quick calculator and a quick refresher on what these terms mean, you can check out this website.
- Randomization – of test subjects is important because it helps to reduce the effect of bias in the study results.
- Control Group – Very important to weed out perceived, but not real, effects/benefits from whatever is being studied. Thus, when testing a drug, there will be one group of test subjects receiving the medicine being studied, and another group, separate and distinct from the first, receiving a sugar pill. Neither knows what they’re being administered. The results from the control group are compared with the results from the medicine group to see if there is a real effect, beyond placebo.
- Blinding – Single/Double/Triple. Blinding comes in many flavors. The gold standard is double-blinding, when neither the test subject, nor the person administering the thing being tested know what they are dealing with. Triple blinding is also possible, when the people doing the statistical analysis of the raw data are not told which one they’re analyzing. So for example, in the drug scenario double blinding means the test subject does not know if he’s getting the medicine or the sugar pill, the person handing out the pills does not know if she’s handing out the medicine or the sugar pill. In the triple blinding case, the statistician would not be told “here is the data for the medicine and here is the data for the sugar pill”. Instead, she’ll be told “here is data set A and here is data set B”.
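To make the layers concrete, here is a toy sketch (my own hypothetical helper, not anything from the study) of how a blinded allocation keeps each party ignorant of exactly the things it should be ignorant of:

```python
import random

def blind_allocation(participants, seed=None):
    """Randomly allocate participants to treatment or placebo, then hide the
    mapping behind coded labels so that dispensers (double blinding) and
    analysts (triple blinding) never see which arm is which."""
    rng = random.Random(seed)
    # Randomization: each participant is assigned to an arm by chance.
    allocation = {p: rng.choice(["treatment", "placebo"]) for p in participants}
    # Double blinding: the dispenser sees only an opaque kit code per subject.
    codes = rng.sample(range(10_000), len(participants))
    kit_codes = {p: f"KIT-{c:04d}" for p, c in zip(participants, codes)}
    # Triple blinding: the analyst gets data sets labeled "A"/"B", not by arm.
    arm_labels = dict(zip(["treatment", "placebo"], rng.sample(["A", "B"], 2)))
    analyst_view = {kit_codes[p]: arm_labels[allocation[p]] for p in participants}
    # Only a sealed key, held by a third party, can unblind the results.
    sealed_key = {"allocation": allocation, "kit_codes": kit_codes,
                  "arm_labels": arm_labels}
    return analyst_view, sealed_key
```

The point is structural: each party sees only the labels it needs, and the key linking “A” and “B” back to treatment and placebo stays sealed until the analysis is done.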
These are the core, basic requirements of a properly designed scientific study. Now going back to the study at hand, the skeptics claimed that the methodology, a fancy way of saying the design, of the study was inappropriate, “highly dubious” I believe were the exact terms, if my memory serves correctly. Let us go through the list and see if that is indeed the case, or if Alex was right that this study has very good design. Only one of them can be right, so let us try to find out who is.
I will skip over #1 and #2 and give the study authors a “Pass.” I am not aware of any evidence of fraud, so unless such evidence comes to light I am inclined to believe none was present; and since I am not an expert in statistics, I cannot scrutinize the statistical methods and results, so I am willing to give the study the benefit of the doubt in that regard as well. Let’s look at the other criteria, the ones any lay person can evaluate for themselves.
#1 Sample Size – Was the sample size appropriate? Well, what is the sample size in this study? Is it the number of students recruited? The number of mediums? Given that what is being studied here is not the effect of the reading on the sitter, but the ability of the medium to give a correct reading, I would suggest that the sample is the total number of readings performed, thus n=16. Is this sample size appropriate? No, not to enable us to reach any conclusions whatsoever. Even if everything else had been done perfectly, and all the other criteria followed to the letter, a sample size of 16 at best indicates that a larger sample is needed. No conclusions can be drawn from 16 data points.
You do not have to take just my word for it. Let us refer to the calculator I linked to above. How can we apply it to this case? The study concludes that, because 13 of 16 readings were correctly identified, we have strong evidence for psychic powers, or anomalous reception of information; the unstated premise is that those 13 readings must have been on target. So we can treat readings as our unit of analysis. Now, how many such readings take place in the US alone in any given year? I would venture a guess of something in the hundreds of thousands, but let us say, for argument’s sake, that the population is only 100,000 readings.
Now we ask: how many readings would we have to study for the sample size to be appropriate? That depends on the desired confidence level and interval. No study I’ve ever read has used a confidence level of less than 95%, and if I am not mistaken this study uses 99.9%, but for argument’s sake we’ll use the lower level of 95%, which requires a smaller sample. The interval is the “plus or minus” figure that usually follows poll results; a value of 5 is typical, so let us go with that. Now type all this information into the calculator:
- Confidence Level – 95%
- Confidence Interval – 5
- Population – 100,000
The result? 383. In other words, you’d need to look at 383 readings to be 95% sure that the result is within 5% of the true value. All of a sudden 16 looks really, really tiny, doesn’t it? Strike One!
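For anyone who wants to see the arithmetic rather than trust the calculator, the 383 can be reproduced with Cochran’s sample-size formula plus a finite-population correction, which is the standard math behind calculators of this kind. A sketch, assuming the worst-case proportion of 50%:

```python
import math

def required_sample_size(confidence_z: float, margin: float, population: int) -> int:
    """Cochran's formula with finite-population correction.

    confidence_z: z-score for the confidence level (1.96 for 95%)
    margin: confidence interval half-width as a proportion (0.05 for +/- 5)
    population: total number of readings we want to generalize about
    """
    p = 0.5  # worst-case proportion; maximizes the required sample
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)  # finite-population correction
    return math.ceil(n)

print(required_sample_size(1.96, 0.05, 100_000))  # -> 383
```

Plugging in a 99.9% confidence level (z ≈ 3.29) instead would push the required sample above 1,000, making n=16 look even smaller.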
#2 Randomization – Were the test subjects chosen at random? No; neither the sitters nor the mediums were chosen at random from their respective populations. I can see why that would be so with the mediums (you want to test the best of the best, after all, and you don’t want the charlatans in the medium population to dilute the effect), but I do not understand why this simple requirement was not followed with the sitters. The authors had a pool of 1,600 students to choose from, more than enough to draw a nice, random sample. Instead, the sitters were selected based on answers of “yes” or “unsure” to questions about their belief in the afterlife and mediums. Furthermore, the final 8 were chosen based on those answers and on the desired pairing, in order to optimize “the ability of blinded raters to differentiate between two gender-matched readings during scoring”.
What does all this mean? Simply put, it means the authors hand-picked their sitters based on the survey questions, and even went so far as to make sure that the paired deceased were as different from each other as possible. That takes randomization and throws it out the window, no questions asked.
So what exactly were these survey questions the volunteers had to answer? What were the answers of the final 8? We do not know, and unfortunately Dr. Beischel’s e-mail did not work so I could not ask these questions. But these are crucial pieces of information to have. What if all 8 had answered “Yes” to the question “do you believe in an afterlife” or “do you believe in mediums and their ability to contact the dead”? Wouldn’t you think that would severely bias the way they look at the readings? Strike Two!
#3 Control Group – This was a sticking point between the skeptics and Alex on the podcast. Alex kept insisting that there was a control: that the fact that each person got their intended reading and another reading constituted one. However, he’s missing the main point about controls: a control is supposed to be a separate group, distinct from the “treatment” group. The magnitude of the placebo effect, random chance, and so on cannot be gauged by having the same test subject choose between treatment A and the placebo. That’s just not how science works, and if we claim to be running a scientific experiment we must play by the rules of science. You cannot make up a new definition for “control”; that’d be having your cake and eating it too!
So what would the control have looked like in this experiment? Sticking with the way the experiment was run, the control group would be a second group of 8 students, matched to the first 8, who would receive their readings not from a “medium” but from a mentalist, someone who can produce such readings without claiming paranormal powers. You would then run the exact same experiment and tally the results. If there were a statistically significant difference between the first group of 8 students and the control group, then one might reasonably say that more study is needed. This study, as run, lacked a control group. Strike Three!
#4 Blinding – Is this really a triple-blinded study, as the authors proclaim? Remember, triple blinding means that the participants are blinded (they don’t know whether they are getting the real or the control treatment), the person handing out the treatment does not know what they’re giving out, and the statisticians analyzing the results do not know which data set they’re analyzing. This study fails on all three counts.
First, the test subjects were not blinded, simply because there was no control group. Every student knew they were getting a “real” reading. You cannot have participant blinding without a control group, and having the test subject choose between their intended reading and a decoy does not constitute blinding, especially when the readings are set up to be as different as possible. That’s a basic fact, and anyone who has a problem with it does not understand controls and blinding as they are used in science.
Second, the mediums were not blinded. To effectively blind the mediums, they should not have known whether the name they were given belonged to a dead person or a living one. Not only did the mediums work in complete confidence that they were dealing only with dead people, but they also knew the gender and approximate age of the deceased they were supposed to read for. That is not blinding; that is the opposite of blinding. Each medium went in knowing three pieces of information: the person is indeed dead (so no chance of giving a reading for a living person), the person’s gender (gleaned from the name), and the person’s approximate age (roughly late teens to early twenties for the dead peer, late 40s and up for the dead parent). That is a lot of information for someone skilled at the guessing game. The way the experiment is set up betrays one important thing: the authors went into this study already assuming the mediums can indeed talk to the dead, so they didn’t even bother to control for the possibility of fraud, or guessing.
Thirdly, there is no indication in the paper that the statisticians and the other people involved in interpreting the data were blinded. The authors refer to the proxy sitters as their third level of blinding, but that is not what triple blinding means. As a matter of fact, the presence of the proxy sitters is completely baffling: they did not need to be there, they add nothing to the overall methodology, and it seems their sole purpose was to pass on a name to the medium, which could easily have been done otherwise. Anyone who knows anything about triple blinding can easily confirm this is not it.
So the test subjects and the mediums were not properly blinded, and it appears the statisticians weren’t either. Strike Four!
Other problems with the study
Besides the methodological problems described above, here are more problems that need to be worked out before we can place any reliance on the results of this study.
- There is no mention of how accurately the mediums’ readings matched the descriptions of the deceased that the students gave at the beginning. Were any specific pieces of information provided (such as the deceased’s birth date, death date, place of death, mode of death, social security number, etc., something specific to the person being “read”)?
- The participants were forced to choose one of the two readings provided. They were not asked to pick a reading only if it very closely fit their deceased person; they had to choose one of the two. When you combine this 50-50 forced choice with the fact that the students were hand-picked to participate (possibly chosen for their propensity to believe) and the fact that the two readings would have been fairly different (the medium knew the approximate ages), that can easily explain 13 hits out of 16. And the lack of a control group makes that number almost useless, as we have nothing to compare it to.
- When the students were chosen from the pool of 1,600, it was done in order to “optimize testing conditions…based on answers “yes” or “unsure” to survey questions about his/her beliefs,” yet no explanation is given of what this optimization process involved.
- When the dead people were paired, it was done so as to “optimize differences” across various characteristics. Again, no description is given. When they say the pairing was optimized for age, does that mean the age difference within the pair was decreased or increased? The answer is unknown.
- The second part of the reading was the Life Questions section, in which the medium was to answer 4 specific questions about the dead person. The results on the accuracy of these answers are not available.
- Each medium reading was transcribed and turned into a numbered list of individual items. It is unknown how specific the items included in the list by the experimenter were. In other words, did an item say “Bob died in a motorcycle crash on I-95,” or did it say “Bob died peacefully”? Those kinds of things always matter in a study of this sort.
We could get into more detail about other open questions that remain unaddressed. In the Results section the authors promise more details in a future manuscript, but I haven’t been able to find it, and, as stated previously, my attempt to contact the lead author was futile.
So what can we take from this study? How reliable is it? Unfortunately for the talking-to-the-dead enthusiasts, this study is scientifically worthless. It had a ridiculously small sample size, it lacked a control group, and it had no randomization or proper blinding, not in a scientific sense at least. There are many other unanswered questions, and missing crucial information that could shed some light on the results. The authors forced the subjects to pick one of the two readings, which alone gives a 50-50 chance and, coupled with the other points raised above, more than explains the results observed. Most importantly, nothing was reported on the accuracy of the mediums’ readings: how specific they were, especially in the Life Questions section, and how well they matched the subjects’ descriptions on specific items.
Would it not have been easier to ask the students to provide ten pieces of information specific to the dead person, ask the medium to produce a reading covering those ten pieces of information, and then ask a third party to analyze how well the medium’s answers matched them, as opposed to relying on a forced choice between two options? I think so. Why wasn’t it done? I’d rather not speculate.
No comment needed; his responses and attitude speak for themselves.