Skepfeeds-The Best Skeptical blogs of the day

Case study : Bad Studies

Posted in Skepdude by Skepdude on April 13, 2009

Homeopathy enthusiasts are pointing to yet another bad study as proof that homeopathy has a basis in science. This one is from March 2007 and is titled “Homeopathic and conventional treatment for acute respiratory and ear complaints: A comparative study on outcome in the primary care setting“. It purports to show that homeopathic remedies worked just as well as conventional remedies to treat “accute runny nose, sore throat, ear pain, sinus pain or cough”. You can read the details at the link above. What I want to concentrate on here is just how badly designed this study is. It seems to me as though it was set up specifically to produce the sort of answer that the homeopaths were after. Nevertheless, let me present my arguments and you can make up your own mind.

Before we look at this specific study, let us go over the basic things to look at when reviewing any study. How well a study is designed, and how well it conforms to certain rules, has a profound effect on how much reliance we can place in its results. One way of assessing how good a study is, is what is known as the Jadad Scale. The Jadad scale is a simple checklist that helps you decide how well designed, and therefore how trustworthy, a clinical study is. It concentrates on the following areas: randomization, double blinding, and withdrawals and drop-outs. Here is a typical way of calculating the Jadad score.

Basic Jadad Score is assessed based on the answer to the following 5 questions.

The maximum score is 5.


Each question answered “yes” scores 1 point; each “no” scores 0.

1. Was the study described as random?

2. Was the randomization scheme described and appropriate?

3. Was the study described as double-blind?

4. Was the method of double blinding appropriate? (Were both the patient and the assessor appropriately blinded?)

5. Was there a description of dropouts and withdrawals?

Quality Assessment Based on Jadad Score

Score 0–2: Low quality

Score 3–5: High quality
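To make the tally concrete, the checklist above can be sketched as a tiny scoring function. This is a minimal sketch in Python; the function and variable names are mine, and it follows the simplified scale as presented here, not the full Jadad instrument (which also deducts points for inappropriately described methods):

```python
# The five yes/no items of the simplified Jadad checklist described above.
JADAD_QUESTIONS = [
    "Was the study described as random?",
    "Was the randomization scheme described and appropriate?",
    "Was the study described as double-blind?",
    "Was the method of double blinding appropriate?",
    "Was there a description of dropouts and withdrawals?",
]

def jadad_score(answers):
    """Each answer is True (yes, 1 point) or False (no, 0 points); max score is 5."""
    if len(answers) != len(JADAD_QUESTIONS):
        raise ValueError("expected one answer per checklist question")
    return sum(1 for a in answers if a)

def quality(score):
    """Map a Jadad score to the quality band in the table above: 0-2 low, 3-5 high."""
    return "High" if score >= 3 else "Low"

# The homeopathy study discussed below answers "no" to all five questions:
study_answers = [False, False, False, False, False]
print(jadad_score(study_answers), quality(jadad_score(study_answers)))  # prints: 0 Low
```

Running it on the five “no” answers the study below earns gives a score of 0 and a “Low” quality rating.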

So let us go over the homeopaths’ study and see how it ranks based on the Jadad score.

1-Was the study described as random? NO – 0 points (cumulative)

Methods

The study was designed as an international, multi-centre, comparative cohort study of non-randomised design.

Not only that, but the patients were asked which group they wanted to be in, homeopathy or conventional medicine (the misspelling of the word enrollment is theirs, not mine! Also, emphasis is added by me).

Upon enrolment in the study, patients, or the patients’ legal guardians were asked for their treatment preference. In the homeopathy group, 81% of patients had a preference for homeopathy, 18% had no treatment preference. In the conventional group, 55% of the patients’ preferred conventional treatment, 2% homeopathy and 43% had no treatment preference.

Fun Fact – 81% of the patients in the homeopathy group had chosen homeopathy and the results from the homeopathy group were…drum roll….86.9% reported complete recovery. Can you say placebo?

2-Was the randomization scheme described and appropriate? There was none. – 0 points (cumulative)

Randomization is very important when setting up clinical studies. Not only is it important to randomize the patients; how you randomize them also matters. Some methods of randomization rank higher than others. According to Wikipedia:

Randomisation is a process to remove potential distortion of statistical results arising from the manner in which the trial is conducted, in particular in the selection of subjects. Studies have indicated, for example, that nonrandomised trials are more likely to show a positive result for a new treatment than for an established conventional one.

I haven’t checked the claim in the last sentence, so take it with a grain of salt, even though it does make sense.

3-Was the study described as double-blind? No, there was no blinding whatsoever; doctors knew what treatment each patient was getting, and patients knew it too (they got to choose, remember) – 0 points (cumulative)

Since it was not possible to blind patients for their treatment, potential reporting bias from patient’s expectations may have influenced the outcome.

You think?

4-Was the method of double blinding appropriate? (Were both the patient and the assessor appropriately blinded?) There was no double blinding, there wasn’t even single blinding. – 0 points (cumulative)

This is where I would start worrying if I was trying to use this study to prove my point. We’re up to question 4 of 5 and they have 0 points!

5-Was there a description of dropouts and withdrawals? Not even close; they only mention that 6 people who got no treatment were dropped. – 0 points TOTAL!

This study ranks as possibly the worst-designed study you could come up with. There was no randomization, no blinding of any sort, let alone double, and no control group; in other words, nothing that would lend it even a slight amount of legitimacy. The authors seem to have been aware of this, for they make sure to make the following point: none of that stuff really matters, our study is good enough as it is! My comments are interspersed in the quoted passage below.

Objective data collection and evaluation is needed to assist physicians in patient care and advance the quality of medical practice [2] [This study will presumably be objective!]. Clinical trials, especially randomised controlled trials (RCTs), are generally accepted as producing the highest level of evidence for medical interventions. [I feel there’s a “but” coming!] Driven by the discovery of new pharmaceutical substances, demands from regulatory authorities for clinical data and the need of physicians for evidence based treatment strategies, the methodology of RCTs became the subject of research itself. Within this context, the strengths and weaknesses of such trials have been debated [3]. Placebo-controlled RCTs are indispensable for the development of pharmaceutical agents with unknown efficacy and safety profiles [Such as, maybe, homeopathic agents. On the other hand, if the efficacy and safety of an agent is known, why would one even bother to do a study?]. Their limitations result from highly standardized study protocols and patient populations, which may create artificial situations that differ from daily practice [Oh, I see: they are more tightly controlled and have stricter requirements, and THAT makes them problematic. What?]. Moreover, even the fact that patients are enrolled into a placebo-controlled clinical trial will influence treatment outcome, sometimes leading to high placebo or low verum response rates [4] [Somehow I did not think it was a matter of high or low; I thought it was a matter of the truest measure, which is the point of the control groups. Further, proper blinding should guarantee the truest results possible.]. Consequently, more practice-based studies have been developed such as pragmatic RCT’s or non-randomised cohort studies. [In other words, when you can’t live up to these standards, make up more lax standards and claim they are just as good. Pathetic!] Especially non-interventional outcomes studies have only few inclusion and exclusion criteria. Therefore they may provide information about a broad and heterogenous patient population thus resulting in high external validity for daily medical practice [Actually, lack of controls will result in exactly the opposite: it will be useless for daily medical practice. It may provide a good gauge for people’s ability to deceive themselves, though]. However, the fact that patients are not randomly assigned to treatments in such outcome studies may lead to baseline differences between groups and makes the interpretation of the results more susceptible to bias. [May? That’s putting it mildly!] This disadvantage may be overcome, at least in part, by the application of statistical methods to control for baseline differences between treatment groups [No, it can’t, otherwise randomization would not be required, EVER. Good statistics can never make up for bad data. Statistics rely on the data itself. The above claim makes no sense!]

Fun Fact –

Apart from the ongoing discussion about clinical evidence, complementary therapies are well integrated into primary care in most Western countries

Yeah, apart from the fact that CAM has not been shown to work, IT IS POPULAR. Good enough for me!

Conclusion

This study is horrendously designed. It lacks all of the basic requirements that every clinical trial should have, such as randomization, double blinding, a control group, etc. Based on that fact alone, regardless of the sample size, and regardless of how careful and precise the statistics, the results of such a study will be completely unreliable. The data set is corrupted due to the lack of controls; as such, it does not matter how carefully you analyze it, the result would be meaningless. Even if it had told us that homeopathy is useless, we would still have to ignore it. And ignore it I will, as will the whole science-based community. Sorry homeopaths, you’re still stuck at 0. Good luck next time.


Acupuncture – Disconnected from Reality

Posted in Science Based Medicine by Skepdude on March 18, 2009

The primary goal of science-based medicine (SBM) is to connect the practice of medicine to the best currently available science. This is similar to evidence-based medicine (EBM), although we quibble about the relative roles of evidence vs prior plausibility. In a recent survey 86% of Americans said they thought that science education was “absolutely essential” or “very important” to the healthcare system. So there seems to be general agreement that science is a good way to determine which treatments are safe and work and which ones are not safe or don’t work.

The need for SBM also stems from an understanding of human frailty – there are a host of psychological effects and intellectual pitfalls that tend to lead us to wrong conclusions. Even the smartest and best-meaning among us can be led astray by the failure to recognize a subtle error in logic or perception. In fact, coming to a reliable conclusion is hard work, and is always a work in progress.

There are also huge pressures at work that value things other than just the most effective healthcare. Industry, for example, is often motivated by profit. Institutions and health care providers may be motivated by the desire for prestige in addition to profits. Insurance companies are motivated by cost savings. Everyone is motivated by a desire to have the best health possible – we all want treatments that work safely, often more so than the desire to be logical or consistent. And often personal or institutional ideology comes into play – we want health care to validate our belief systems.

These conflicting motives create a disconnect in the minds and behaviors of many people. They pay lip service to science-based medicine, but are good at making juicy rationalizations to justify what they want to be true rather than what the science supports. We all do this to some degree – but, in my opinion, complementary and alternative medicine (CAM) is a cultural institution that is built upon these rationalizations. It is formalized illogic and anti-science concealed as science under a mountain of rationalizations.

Some recent news items and reports dealing with acupuncture demonstrate this disconnect quite well.

READ THE REST OF THIS ENTRY AT “SCIENCE BASED MEDICINE”

Consciousness, meditation and the Dalai Lama

Posted in Rationally Speaking by Skepdude on December 8, 2008

CLICK HERE TO GO TO THE ORIGINAL ENTRY AT “RATIONALLY SPEAKING”

Speaking of science and religion, I got significantly annoyed by a short piece in Nature magazine by Michael Bond (13 November 2008). Bond reviews two recent books on Buddhism and science: “Mind and Life: Discussions with the Dalai Lama on the Nature of Reality,” by Pier Luigi Luisi, and “Buddhism and Science: A Guide for the Perplexed,” by Donald S. Lopez.

I keep being baffled by the fact that so many scientists think it is a cool idea to engage in absurd fits of mental acrobatics so that one can claim that religion, after all, is not in contradiction with science, and in fact can even be somewhat helpful. Granted, Buddhism certainly doesn’t have the same attitude that, say, Christianity and Islam have about science, but there still is a lot of unnecessary fluff that gets thrown around in this misguided quest for a unity between science and religion.

For instance, Bond says that “science and Buddhism seem strangely compatible … [because] to a large degree, Buddhism is a study in human development.” No, it isn’t. Certainly not in the scientific sense of “study.” Buddhism, like all mystical traditions, is about introspection, notoriously a remarkably unreliable source of “evidence.” In that sense, Buddhism is much closer to some continental philosophical traditions based on phenomenology and first-person subjectivity than to science — the quintessential third-person approach to the study of natural phenomena.

Second, Bond contends, Buddhism has an energetic “champion of science” in the current Dalai Lama. That may very well be, but of course this wasn’t the case with past Lamas, nor is there any assurance that it will continue to be with the next one. This hardly seems grounds for claiming “strange compatibility.” True, the current DL has said that if science should ever find a notion endorsed by Buddhism to be not true “then Buddhism will have to change.” It certainly sounds a heck of a lot better than the usual nonsense coming from creationists and intelligent design proponents.

But a moment’s reflection will show that this is a pretty empty statement on the Lama’s part, much as I don’t doubt that he really meant it. What sort of Buddhist concepts could possibly be proven wrong (or right) by science? Buddhism, again like all mystic traditions, phrases its teachings in such vague language that they are simply not amenable to rational, let alone strictly empirical, analysis. Are we one with the universe? Not really, unless one means that we are made of the same basic stuff as everything else, which I don’t think is what Buddhism means. And even if it meant something like that, to claim congruence with science leads to the same anachronism committed by people who say that the atomist philosophers of ancient Greece had “anticipated” the discoveries of modern physics. No, they didn’t: they were working out of metaphysical presuppositions, did not do any mathematical or experimental work, and most certainly didn’t mean what we do by the term “atom.”

Bond goes so far as to suggest that there is an area of research where Buddhism actually has achieved more than what science has produced so far: when it comes to studying consciousness, he says, Buddhism offers “a kind of science of introspection.” It’s worth quoting Bond in full here: “Whereas cognitive science’s best guess is that consciousness is an emergent property of neuronal organization, Buddhists see it at some pure subtle level as not contingent on matter at all, but deriving instead from ‘a previous continuum of consciousness” — the Dalai Lama’s words — that transcends death and has neither beginning nor end.”

Wow. Where to begin? How about with the observation that “a science of introspection” is an oxymoron? As I mentioned above, introspection is certainly a rich kind of experience that can be cultivated for one’s own edification, but it is not and cannot be “science” because science is based on the idea of independent verification of empirical findings. Second, that consciousness is an emergent property of neuronal organization is much more than a “guess,” as serious research in neurobiology has made stunning progress in identifying specific regions of the brain that provide the material basis for specific aspects of the conscious experience. And finally, what on earth is even remotely scientific about completely unfounded and even literally meaningless claims about a “continuum of consciousness”? Continuum means adjacent, to what would consciousness be adjacent, pray?

Look, Buddhists have all the rights to believe all the fluff they want, just like anyone else. And unlike fundamentalist Christians they at least don’t pretend to teach their mysticism in science classes. But why do religionists crave so much the recognition of science, beginning with creationists themselves? (After all, they talk about “creation science,” and “intelligent design theory.”) And why do some scientists lend credence to the Dalai Lama, the Pope, and whoever else invites them for a weekend in Rome or in Dharamsala? The best that can be said about science and religion is that they have nothing to do with each other, and most certainly nothing to teach to each other. Let’s not pretend otherwise for the sake of cultural correctness.

CLICK HERE TO GO TO THE ORIGINAL ENTRY AT “RATIONALLY SPEAKING”

Scientific bias and the void-of-course moon

Posted in Pharyngula by Skepdude on October 6, 2008

Stuart Buck persists in claiming that scientists have a bias against the supernatural, and that we dismiss it out of hand. This isn’t true; the problem is that supernatural explanations are poorly framed and typically unaddressable, so we tend to avoid them as unproductive. What one would actually find, if one took the trouble to discuss the ideas with a scientist, is that they are perfectly willing to consider peculiar possibilities if they are clearly stated. We’ll even briefly consider something as insane and worthless as astrology, which is even less credible as a field of study than Intelligent Design.

Here’s an example from years ago on Usenet, in the newsgroup sci.skeptic. An astrologer, Thomas Seers, was insisting that his weird little pseudoscience was a suitable topic for a science course. One of the skeptics, Robert Grumbine, politely asks him for specifics:

Robert Grumbine: Let us say that I teach astronomy.  Let us suppose I’ve decided to spend an hour on astrology.  What would my presentation be?  Keep in mind that this is a science class, so part of my job would be to discuss what experiments have been done that demonstrate that it works, as well as to describe how it works (in the sense of how the students could make the predictions themselves).

Thomas Seers: Hello Robert,
You appear to be asking a serious question, so I will give you an experiment to try.  This will also give you an insight to what Alchemists did years ago.
On 10/20/99 from 2 AM to 8 AM EDT, mix a bowl of jello and you will find it won’t jel. My basic students have this as a homework assigmnet to learn of a void-of-course Moon period. Silly thing, huh. It can be repeated over and over again. Don’t spill it now :-).

READ THE REST OF THIS ENTRY AT “PHARYNGULA”

How To Improve Science Education

Posted in Neurologica by Skepdude on September 5, 2008

The stated “mission” of the loosely defined “skeptical movement” is to promote science and reason. At the core of this mission is the promotion of life-long quality science education. The many blogs, podcasts, magazines, lectures, and books primarily serve this purpose – to popularize science and help teach scientific philosophy, methodology, and facts to the public.

But what about formal public science education? There appears to be general agreement among skeptics that the quality of science education is generally poor, and yet is critical to our goals. But what have we done about it? Too little, I think.

READ THE REST OF THIS ENTRY AT “NEUROLOGICA”

Skepquote of the day

Posted in Skepquote by Skepdude on August 1, 2008

This is the 21st century; it’s time to abandon wishful thinking and embrace rationality, despite the pain that some folks feel when forced to think. It’s encouraging to me to see that scientific and medical authorities, as well as the media, are actively moving to have the claims of the homeopaths properly – extensively and scientifically – evaluated, though those practitioners dread such investigations, and prefer to mumble that they’d like to remain adamant in their self-delusion. The notion that literally nothing – just some sort of mystical “memory vibration” – can have any effect on a patient, needs to be relegated away, along with phlogiston and blood-letting. The sole real effect of the interference of a homeopath appears to be that an ugly lump in the patient’s wallet is significantly reduced. (emphasis added)

James Randi, SWIFT July 31, 2008.