Homeopathy enthusiasts are pointing to yet another bad study as proof that homeopathy has a basis in science. This one is from March 2007 and is titled “Homeopathic and conventional treatment for acute respiratory and ear complaints: A comparative study on outcome in the primary care setting”. It purports to show that homeopathic remedies worked just as well as conventional remedies to treat “acute runny nose, sore throat, ear pain, sinus pain or cough”. You can read the details at the link above. What I want to concentrate on here is just how badly designed this study is. It seems to me as though it was set up specifically to produce the sort of answer the homeopaths were after. Nevertheless, let me present my arguments and you can make up your own mind.
Before we look at this specific study, let us go over the basic things to look at when reviewing any study. How well a study is designed, and how well it conforms to certain rules, has a profound effect on how much reliance we can put in its results. One way of assessing how good a study is, is what is known as the Jadad scale. The Jadad scale is a simple checklist that helps you decide how well designed, and therefore how trustworthy, a clinical study is. It concentrates on the following areas: randomization, double blinding, and withdrawals and dropouts. Here is a typical way of calculating the Jadad score.
Basic Jadad Score is assessed based on the answer to the following 5 questions.
The maximum score is 5.
Each “Yes” answer scores 1 point and each “No” scores 0:

1. Was the study described as random? (Yes = 1, No = 0)
2. Was the randomization scheme described and appropriate? (Yes = 1, No = 0)
3. Was the study described as double-blind? (Yes = 1, No = 0)
4. Was the method of double blinding appropriate? (Were both the patient and the assessor appropriately blinded?) (Yes = 1, No = 0)
5. Was there a description of dropouts and withdrawals? (Yes = 1, No = 0)

A total score of 0–2 indicates low quality; a score of 3–5 indicates high quality.
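For concreteness, the checklist above is simple enough to compute mechanically. Here is a minimal sketch in Python (my own illustration; the function and variable names are mine, and this is the basic five-point version, without the deduction rules some variants of the scale add):

```python
def jadad_score(randomized, randomization_appropriate,
                double_blind, blinding_appropriate,
                dropouts_described):
    """Basic five-question Jadad score: one point per 'yes' answer."""
    answers = (randomized, randomization_appropriate,
               double_blind, blinding_appropriate,
               dropouts_described)
    return sum(1 for yes in answers if yes)

def jadad_quality(score):
    """A score of 0-2 is low quality, 3-5 is high quality."""
    return "high" if score >= 3 else "low"

# The study discussed in this post answers 'no' to every question:
score = jadad_score(False, False, False, False, False)
print(score, jadad_quality(score))  # 0 low
```

A well-run trial that answers yes to all five questions would score 5 and rank as high quality; as we will see, the study under discussion manages the opposite extreme.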
So let us go over the homeopaths’ study and see how it ranks based on the Jadad score.
1-Was the study described as random? NO – 0 points (cumulative)
The study was designed as an international, multi-centre, comparative cohort study of non-randomised design.
Not only that, but the patients were asked which group they wanted to be in, homeopathy or conventional medicine (the misspelling of the word enrollment is theirs, not mine! Also, emphasis is added by me).
Upon enrolment in the study, patients, or the patients’ legal guardians were asked for their treatment preference. In the homeopathy group, 81% of patients had a preference for homeopathy, 18% had no treatment preference. In the conventional group, 55% of the patients’ preferred conventional treatment, 2% homeopathy and 43% had no treatment preference.
Fun Fact – 81% of the patients in the homeopathy group had chosen homeopathy and the results from the homeopathy group were…drum roll….86.9% reported complete recovery. Can you say placebo?
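To make the placebo/self-selection worry concrete, here is a toy simulation (entirely my own construction, with made-up numbers; it is not data from the study). It assumes that believing in your chosen treatment inflates reported recovery by a fixed amount, and shows how a self-selected group can report dramatically better outcomes even when the treatment does nothing at all:

```python
import random

random.seed(42)

# Assumed (made-up) parameters for this toy model:
BASE_RECOVERY = 0.5        # chance of reporting recovery with no real effect
EXPECTATION_BOOST = 0.3    # extra reporting from believing in the treatment

def simulate(n=10000):
    """Compare reported recovery in a self-selected vs a randomized group."""
    # Self-selected group: everyone expects their chosen treatment to work.
    chosen = sum(random.random() < BASE_RECOVERY + EXPECTATION_BOOST
                 for _ in range(n))
    # Randomly assigned group: expectation plays no systematic role.
    assigned = sum(random.random() < BASE_RECOVERY for _ in range(n))
    return chosen / n, assigned / n

chosen_rate, assigned_rate = simulate()
print(f"self-selected: {chosen_rate:.0%}, randomized: {assigned_rate:.0%}")
```

With these invented numbers the self-selected group reports recovery around 80% of the time versus roughly 50% for the randomized group, with zero real treatment effect. The gap is pure expectation bias, which is exactly what randomization and blinding exist to remove.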
2-Was the randomization scheme described and appropriate? There was none. – 0 points (cumulative)
Randomization is very important when setting up clinical studies. Not only is it important to randomize the patients, but how you randomize them also matters: some methods of randomization rank higher than others. According to Wikipedia:
Randomisation is a process to remove potential distortion of statistical results arising from the manner in which the trial is conducted, in particular in the selection of subjects. Studies have indicated, for example, that nonrandomised trials are more likely to show a positive result for a new treatment than for an established conventional one.
I haven’t checked the claim in that last sentence, so take it with a grain of salt, even though it does make sense.
3-Was the study described as double-blind? No, there was no blinding whatsoever; doctors knew what treatment each patient was getting, and patients knew it too (they got to choose, remember?) – 0 points (cumulative)
Since it was not possible to blind patients for their treatment, potential reporting bias from patient’s expectations may have influenced the outcome.
4-Was the method of double blinding appropriate? (Were both the patient and the assessor appropriately blinded?) There was no double blinding, there wasn’t even single blinding. – 0 points (cumulative)
This is where I would start worrying if I was trying to use this study to prove my point. We’re up to question 4 of 5 and they have 0 points!
5-Was there a description of dropouts and withdrawals? Not even close, they only mention that 6 people who got no treatment were dropped. – 0 points TOTAL!
This study ranks as possibly the worst designed study you could come up with. There was no randomization, no blinding of any sort (let alone double), and no control group; in other words, nothing that would lend it even a slight amount of legitimacy. The authors seem to have been aware of this, for they make sure to make the following point: none of that stuff really matters, our study is good enough as it is! My comments are interleaved with the quoted text below.
Objective data collection and evaluation is needed to assist physicians in patient care and advance the quality of medical practice [This study will presumably be objective!]. Clinical trials, especially randomised controlled trials (RCTs), are generally accepted as producing the highest level of evidence for medical interventions. [I feel there’s a “but” coming!] Driven by the discovery of new pharmaceutical substances, demands from regulatory authorities for clinical data and the need of physicians for evidence based treatment strategies, the methodology of RCTs became the subject of research itself. Within this context, the strengths and weaknesses of such trials have been debated. Placebo-controlled RCTs are indispensable for the development of pharmaceutical agents with unknown efficacy and safety profiles [Such as, maybe, homeopathic agents? On the other hand, if the efficacy and safety of an agent is known, why would one even bother to do a study?]. Their limitations result from highly standardized study protocols and patient populations, which may create artificial situations that differ from daily practice [Oh, I see: they are more tightly controlled and have stricter requirements, and THAT makes them problematic. What?]. Moreover, even the fact that patients are enrolled into a placebo-controlled clinical trial will influence treatment outcome, sometimes leading to high placebo or low verum response rates [Somehow I did not think it was a matter of high or low; I thought the point of control groups was to get the truest measure possible. Further, proper blinding should guarantee the truest results possible]. Consequently, more practice-based studies have been developed such as pragmatic RCTs or non-randomised cohort studies. [In other words, when you can’t live up to these standards, make up more lax standards and claim they are just as good. Pathetic!] Especially non-interventional outcomes studies have only few inclusion and exclusion criteria.
Therefore they may provide information about a broad and heterogeneous patient population thus resulting in high external validity for daily medical practice [Actually, the lack of controls will result in exactly the opposite: it will be useless for daily medical practice. It may provide a good gauge of people’s ability to deceive themselves, though]. However, the fact that patients are not randomly assigned to treatments in such outcome studies may lead to baseline differences between groups and makes the interpretation of the results more susceptible to bias. [May? That’s putting it mildly!] This disadvantage may be overcome, at least in part, by the application of statistical methods to control for baseline differences between treatment groups [No it can’t; otherwise randomization would never be required. Good statistics can never make up for bad data, because statistics rely on the data itself. The above claim makes no sense!]
Fun Fact –
Apart from the ongoing discussion about clinical evidence, complementary therapies are well integrated into primary care in most Western countries
Yeah: apart from the fact that CAM has not been shown to work, IT IS POPULAR. Good enough for me!
This study is horrendously designed. It lacks all of the basic requirements that every clinical trial should have, such as randomization, double blinding, a control group, etc. Based on that fact alone, regardless of the sample size, regardless of how careful and precise the statistics, the results of such a study will be completely unreliable. The data set is corrupted by the lack of controls; as such, it does not matter how carefully you analyze it, the result will be meaningless. Even if it had told us that homeopathy is useless, we would still have to ignore it. And ignore it we will: I, and the rest of the science-based community. Sorry homeopaths, you’re still stuck at 0. Good luck next time.
The primary goal of science-based medicine (SBM) is to connect the practice of medicine to the best currently available science. This is similar to evidence-based medicine (EBM), although we quibble about the relative roles of evidence vs prior plausibility. In a recent survey 86% of Americans said they thought that science education was “absolutely essential” or “very important” to the healthcare system. So there seems to be general agreement that science is a good way to determine which treatments are safe and work and which ones are not safe or don’t work.
The need for SBM also stems from an understanding of human frailty – there are a host of psychological effects and intellectual pitfalls that tend to lead us to wrong conclusions. Even the smartest and best-meaning among us can be led astray by the failure to recognize a subtle error in logic or perception. In fact, coming to a reliable conclusion is hard work, and is always a work in progress.
There are also huge pressures at work that value things other than just the most effective healthcare. Industry, for example, is often motivated by profit. Institutions and health care providers may be motivated by the desire for prestige in addition to profits. Insurance companies are motivated by cost savings. Everyone is motivated by a desire to have the best health possible – we all want treatments that work safely, often more so than the desire to be logical or consistent. And often personal or institutional ideology comes into play – we want health care to validate our belief systems.
These conflicting motives create a disconnect in the minds and behaviors of many people. They pay lip service to science-based medicine, but are good at making juicy rationalizations to justify what they want to be true rather than what the science supports. We all do this to some degree – but, in my opinion, complementary and alternative medicine (CAM) is a cultural institution that is built upon these rationalizations. It is formalized illogic and anti-science, concealed as science under a mountain of rationalizations.
Some recent news items and reports dealing with acupuncture demonstrate this disconnect quite well.
Science as it is practiced today relies on a fair measure of trust. Part of the reason is that the culture of science values openness, hypothesis testing, and vigorous debate. The general assumption is that most scientists are honest and, although we all generally try to present our data in the most favorable light possible, we do not blatantly lie about it or make it up. Of course, we are also all human, and none of us is immune to the temptation to leave out that inconvenient bit of data that doesn’t fit with our hypothesis or to cherry pick the absolutely best-looking blot for use in our grant applications or scientific manuscripts. However, scientists value their reputation among other scientists, and there’s no quicker way to seriously damage one’s reputation than to engage in dodgy behavior with data, and there’s no quicker way to destroy it utterly than to “make shit up.”
True, opposing these forces are the need to “publish or perish” in order to remain funded, advance academically, and become tenured, a pressure that can be particularly intense among basic scientists, who will basically lose their jobs and very likely their academic careers if they cannot cover 50% or more of their salaries through grants. I always remember that I’m fortunate in that, even if I failed utterly to renew all my grants and burn through whatever bridge funds my university might give me, I’d be unlikely to be fired, as I could just go back to operating full time. Indeed, I’d even be likely to generate more income for my department by doing surgery than I could through research. Clinician-scientists are in general a drag on the finances of an academic department.
Despite the pressures, however, I’m still left scratching my head over this recently revealed massive scientific fraud, as reported in Anesthesiology News, the Wall Street Journal, and the New York Times. A bunch of you sent it in to me, and when that happens, I usually conclude that I’d best comment on it. First, the fraud:
Academia is notoriously resistant to change, which to some extent is a good thing. It was therefore no surprise that when Wikipedia became a phenomenon most academics scoffed at it as a passing fad, fatally flawed by its very core idea: anybody, and I mean anybody, can become a Wiki author and post new entries or edit existing ones. Surely, this will inevitably lead to chaos and complete unreliability, the critics said. But a few years ago a study of a sample of entries compared the accuracy of Wikipedia with that of the unquestionably prestigious Encyclopedia Britannica, and Wikipedia was at least as accurate, in some cases more.
Of course the “open access” model does have its limits and defects, and even Wikipedia has to maintain a certain amount of vigilance and label particular entries as contentious or unreliable if there is too much traffic and a lot of editing and counter-editing (typically concerning political issues or individual politicians). Still, from apparent chaos the system has allowed for the emergence of a reasonably reliable first-look reference source that truly exploits the power of the internet.
It seems that the next case will come from another sacred cow of academia: peer review. This is the system used by modern academics — both in the sciences and the humanities — to evaluate a scholarly paper before it is published, the chief gateway to insure the high quality of a publication, be it in philosophy, literary criticism, medicine, physics, or what have you. The way it usually works is that an author submits a paper for consideration to the editor of a journal in the appropriate field. The editor makes a first assessment of the manuscript and, if deemed suitable to the journal, sends it out to two or more reviewers, chosen from among people actively engaged in research and scholarship in the field addressed by the submitted paper.
A certain amount of time later (an amount of time that can be irritatingly long for the authors), the reviews come back with a thumbs up or down verdict, usually accompanied by (anonymous, and sometimes nasty) comments for the authors — so that they may revise the original manuscript and send it back to either the same journal (if so invited) or to another one. The process repeats itself until either the paper finds its way into a publication or is forever abandoned on the heap of wasted efforts.
The peer review system has its obvious advantages as a gatekeeper for academic publishing quality, but it has equally obvious drawbacks. First of all, the number of reviewers is fairly small, which means that the comments the authors receive may be reflective of the idiosyncratic views of those individuals, and may not necessarily constitute a good assessment of the general value of the paper. Second, often (though not always) the authors don’t know who the reviewers are, but the converse is not true, which leads to the temptation of stabbing a rival (or a rival’s student) in the back.
One can argue that the real peer review actually takes place over a period of years after the paper (or book) has been published, and it is the result of how, in the long-term, the community at large values the scholarship of the authors. Some papers and books are cited often, some become classics in their field, most are never heard of again — which in itself is not necessarily an indication of poor quality, but may be a simple reflection of the fact that too many people publish too much.
What I will call the classic peer review system, the one that relies on a small number of editor-selected referees, however, is increasingly under challenge. In the physics community, for instance, it has been normal practice for years to post pre-publication versions of one’s paper on internet servers, to get feedback from the rest of the community before formal submission. People can now refer others to these pre-prints by hyperlinks, almost as if they were actual publications, thereby blurring the distinction between formal and informal scholarship. Moreover, an increasing number of open access journals now encourages readers’ comments and even rankings to be posted for each paper, occasionally allowing authors to respond and engage in an open dialogue with the community.
This is, I think, a trend that is here to stay, and that will likely completely change the meaning and practice of academic research over the next decade or so. Still, perhaps the most spectacular — if somewhat under-reported — case of open peer review showed how the blogosphere can be a more effective guardian of scholarship than a small number of overworked editors and reviewers.
What happened was that two people affiliated with Inje University in Korea, Mohamad Warda and Jin Han, submitted a paper to the prestigious journal Proteomics. The paper was entitled “Mitochondria, the missing link between body and soul: Proteomic prospective evidence,” something that should have alerted the Editor, Michael Dunn, and the reviewers that something was amiss (a proteomic paper on dualism and the question of the soul?). Warda and Han’s review of the literature was meant as a criticism of the currently accepted theory that the mitochondria (the cellular organelles that are involved in the production of the energy that keeps the metabolism of the organism going) are the result of an evolutionary endosymbiotic event; in other words, that they originated from the engulfment of a bacterial cell by an ancestor of modern plants, animals and fungi.
Warda and Han wrote: “Alternatively, instead of sinking into a swamp of endless debates about the evolution of mitochondria, it is better to come up with a unified assumption. … More logically, the points that show proteomics overlapping between different forms of life are more likely to be interpreted as a reflection of a single common fingerprint initiated by a mighty creator than relying on a single cell that is, in a doubtful way, surprisingly originating all other kinds of life.”
It is difficult to make sense of the badly written phrase (no language editors at Proteomics?), but surely the reviewers should have been a bit surprised by the obviously unscientific phrase “a mighty creator.” Regardless of whether one thinks that concepts like soul and divine creators make any sense at all (I don’t), they surely do not belong to an ostensibly scientific paper. I am not at all suggesting that Dunn or his reviewers are intelligent design creationists: they simply missed the supernatural references, presumably because they were too busy and distracted by the mountain of very technical language surrounding that specific phrase (though how they missed the title is a bit more difficult to rationalize away).
The happy ending to the story is the result of the normal practice that Proteomics, like many other journals, has of posting papers on its web site before they are actually printed. According to an article in the National Center for Science Education Reports, the first to note the oddity of Warda and Han’s paper was Steven Salzberg, a professor of computer science at the University of Maryland, who blogged about it. That led to blog posts by Attila Cordas, Lars Juhl Jensen and PZ Myers, and eventually to the editor of Proteomics requesting a withdrawal of the paper by the authors, who complied.
Interestingly, the request to withdraw was not based on the creationist claim, but on the fact that the bloggers had uncovered another problem with the paper that had escaped the editor and reviewers: the entire body of the article by Warda and Han had been plagiarized from other, already published, sources! Apparently, their only original contributions were writing in really awful English and the references to the soul and the mighty creator.
The moral of the story is that the much maligned blogosphere (“you know, anybody can write whatever they want, and nobody’s checking”) in this case clearly surpassed the official, academically sanctioned system of peer review. My hunch is that this isn’t going to be the last time this happens, and that we are looking at the dawn of a new era of academic practice, when papers will be scrutinized by thousands of reviewers within a matter of hours of publication. If we can harness this tremendous intellectual power in a reasonably ordered fashion, we will make the next leap toward a truly worldwide community of scholars and authors.
As some of you may have noticed, I have just added a new page to Skepfeeds called “Important Studies”. In this page I will try to gather links to studies important to the skeptical cause. As we all know, our skeptical attitude about incredible claims must be backed up by facts and logic, lest we become as dogmatic as the woo-woo meisters we try to expose. Thus this page was born. Whenever I run across a new study that has importance for any given area of woo, such as acupuncture or chiropractic, I will post a link with a short quote from the conclusion of said study. Please send me any links to such studies at email@example.com as I need to populate this page as quickly as possible. Thank you very much and please do help in any way you can!
Speaking of science and religion, I got significantly annoyed by a short piece in Nature magazine by Michael Bond (13 November 2008). Bond reviews two recent books on Buddhism and science: “Mind and Life: Discussions with the Dalai Lama on the Nature of Reality,” by Pier Luigi Luisi, and “Buddhism and Science: A Guide for the Perplexed,” by Donald S. Lopez.
I keep being baffled by the fact that so many scientists think it is a cool idea to engage in absurd fits of mental acrobatics so that one can claim that religion, after all, is not in contradiction with science, and in fact can even be somewhat helpful. Granted, Buddhism certainly doesn’t have the same attitude that, say, Christianity and Islam have about science, but there still is a lot of unnecessary fluff that gets thrown around in this misguided quest for a unity between science and religion.
For instance, Bond says that “science and Buddhism seem strangely compatible … [because] to a large degree, Buddhism is a study in human development.” No, it isn’t. Certainly not in the scientific sense of “study.” Buddhism, like all mystical traditions, is about introspection, notoriously a remarkably unreliable source of “evidence.” In that sense, Buddhism is much closer to some continental philosophical traditions based on phenomenology and first-person subjectivity than to science — the quintessential third-person approach to the study of natural phenomena.
Second, Bond contends, Buddhism has an energetic “champion of science” in the current Dalai Lama. That may very well be, but of course this wasn’t the case with past Lamas, nor is there any assurance that it will continue to be with the next one. This hardly seems grounds for claiming “strange compatibility.” True, the current DL has said that if science should ever find a notion endorsed by Buddhism to be not true “then Buddhism will have to change.” It certainly sounds a heck of a lot better than the usual nonsense coming from creationists and intelligent design proponents.
But a moment’s reflection will show that this is a pretty empty statement on the Lama’s part, as much as I don’t doubt that he really meant it. What sort of Buddhist concepts could possibly be proven wrong (or right) by science? Buddhism, again like all mystic traditions, phrases its teachings in such vague language that they are simply not amenable to rational, let alone strictly empirical, analysis. Are we one with the universe? Not really, unless one means that we are made of the same basic stuff as everything else, which I don’t think is what Buddhism means. And even if it meant something like that, to claim congruence with science leads to the same anachronism committed by people who say that the atomist philosophers of ancient Greece had “anticipated” the discoveries of modern physics. No, they didn’t, they were working out of metaphysical presuppositions, did not do any mathematical or experimental work, and most certainly didn’t mean what we do by the term “atom.”
Bond goes so far as to suggest that there is an area of research where Buddhism actually has achieved more than what science has produced so far: when it comes to studying consciousness, he says, Buddhism offers “a kind of science of introspection.” It’s worth quoting Bond in full here: “Whereas cognitive science’s best guess is that consciousness is an emergent property of neuronal organization, Buddhists see it at some pure subtle level as not contingent on matter at all, but deriving instead from ‘a previous continuum of consciousness” — the Dalai Lama’s words — that transcends death and has neither beginning nor end.”
Wow. Where to begin? How about with the observation that “a science of introspection” is an oxymoron? As I mentioned above, introspection is certainly a rich kind of experience that can be cultivated for one’s own edification, but it is not and cannot be “science”, because science is based on the idea of independent verification of empirical findings. Second, that consciousness is an emergent property of neuronal organization is much more than a “guess”, as serious research in neurobiology has made stunning progress in identifying specific regions of the brain that provide the material basis for specific aspects of conscious experience. And finally, what on earth is even remotely scientific about completely unfounded, and even literally meaningless, claims about a “continuum of consciousness”? A continuum implies adjacency: to what, pray, would this consciousness be adjacent?
Look, Buddhists have every right to believe all the fluff they want, just like anyone else. And unlike fundamentalist Christians, they at least don’t pretend to teach their mysticism in science classes. But why do religionists crave the recognition of science so much, beginning with creationists themselves? (After all, they talk about “creation science” and “intelligent design theory.”) And why do some scientists lend credence to the Dalai Lama, the Pope, and whoever else invites them for a weekend in Rome or in Dharamsala? The best that can be said about science and religion is that they have nothing to do with each other, and most certainly nothing to teach each other. Let’s not pretend otherwise for the sake of cultural correctness.
A few days ago I wrote an entry titled “Sacred Geometry-Sacred Nonsense?” in which I replied to an entry about sacred geometry posted at the blog “Beyond the Blog”. That blog’s author, Anthony, and I had a nice discussion in the comments section of that entry of mine. Now Anthony has a new entry titled “The Science Gene”, and I have, yet again, some issues with what Anthony has to say. On my previous entry I was told that I did not get the meaning of what he was saying, so I will read this one carefully to make sure that this accusation cannot be thrown my way this time.
In his latest entry Anthony is talking about the paranormal and scientists. He says:
When it comes to this modern breed, I immediately fall into the same category as anyone else who is prepared to give the paranormal a chance.
I never expected any different. The general scientific acceptance of curiosity may work for most areas of life and the universe, but regarding the paranormal, there is a form of mental block. Simply considering the subject is enough to be discounted.
Now the term paranormal is a wide umbrella that encompasses lots of things, from homeopathy and acupuncture to psychics, ESP, remote viewing, astral projection, psychic surgery, etc. Some of these fields, especially the medicine-related ones such as homeopathy and acupuncture, have been studied deeply by the modern science types. I am not sure if Anthony includes skeptics in the “modern breed” category. Organizations such as the James Randi Educational Foundation have been spending lots and lots of time testing every imaginable kind of supernatural/paranormal claim. In fact, Randi’s million dollar challenge remains unclaimed decades after it was instituted.
What group does Anthony think he falls under? It seems that the implication is the “ignored with a wave of the hand” group. In fact, many proponents of the paranormal throw that sort of argument around: oh, the scientists are so arrogant that they don’t even look into our claims, they just discount them out of hand. But is that true? Let’s look at this carefully. As I mentioned, plenty of studies have been done by scientists on many paranormal/supernatural claims (and yes, acupuncture with its chi and yin and yang nonsense is totally paranormal, and so is homeopathy with its law of similars and the dilution nonsense). Psychic abilities have also been tested extensively by the JREF.
But let’s stay on track here. By definition the paranormal/supernatural are beyond natural, they are out of this natural world. Science, also by definition, is concerned with natural explanations and does not, cannot, get involved with stuff that is supposed to be outside of nature. How do you test something when it is defined as being untestable by the tools of science? How do you test psychic abilities if psychics will rationalize (usually after the fact, after having failed miserably) that their powers wane and go away under test conditions? How do you test something which is supposed to work all the time, except when it is being tested under a controlled environment?
So can we blame scientists EVEN IF they did completely ignore supernatural explanations? It is not fair to blame them for not doing something they cannot do, right? Science tests hypotheses, but the hypothesis itself has to be testable. If you define things so that they fall beyond the natural, beyond the testable, then you cannot experiment with them; you cannot study them, properly speaking. Some paranormal claims are 100% of this nature (GOD), whereas others are not completely this way. Therefore some are more suitable for scientific testing and some less, depending on how they are defined by their proponents.
Which takes me back to my original question: which group does Anthony feel he’s being included with? The group that has been tested but has not been shown to work? Or the group that by its very own definition cannot be approached scientifically? Now, if you belong to the first one, is it really a surprise that after study after study fails to even hint that such things work, scientists would say: enough, I will not test the same idea anymore? Is it really unreasonable at this point to say that anyone who comes to me with the very same argument, without a new hypothesis, without new data, without some preliminary test, will not get any more of my time? I don’t think so.
And if Anthony thinks he’s being lumped into the second group, well, then in that case he’d be disqualifying himself from scientific review, and the blame should not be thrown the scientists’ way.
Could it be down to a simple inability in them to comprehend the subject? Certainly it appears so.
That is unfair to say the least. In fact, I submit that skeptics and scientists understand more about the various paranormal subjects than most people who blindly believe in them. We understand how psychics are supposed to perform their tricks. We understand how homeopathy and acupuncture are supposed to work miracles. We know how healing prayer is supposed to work. We do. But we are not convinced. If there is an inability, it is an inability to believe extraordinary claims based on very flimsy evidence. Yes, I confess to that inability.
Behaviour is said to be down to nature or nurture. The former is due to our genes, whilst the latter is said to be to do with our upbringing, etc. Yet I’ve recently begun writing about a third factor in this equation.
Culture could play an important part.
We exist in culture. We are labeled through our culture. Our knowledge is very much a part of our culture. Hence, culture plays an important part in our behaviour.
Now this is more of a technical gripe. Culture does play an important part, that he’s right about. But culture is not a third element. Culture is included in nurture and upbringing. I just wanted to point that out. Not a biggie but it helps to straighten everything that needs straightening I think.
But could it be that changes in culture lead, over several generations, to changes in the behavioural elements of our genetic structure?
That’s an interesting question to entertain, I think, but I find it very hard to accept that some behavioral trait that is not genetic in any sense can somehow be transferred to the genes. Very, very doubtful to say the least. Do we have any geneticists who read this blog who could shed some light on this area? I am in no position to say conclusively either way, but I lean towards no right now. Anthony offers another possibility:
We talk of change through the ‘meme’, but I’m suggesting here that it could be a real genetic influence, and not just a concept. In effect, what we are is not enshrined in genetic stone, but fluid. We change as our culture directs.
As with evolution generally, the culturally fittest ideas could well survive to be conditioned into the person. Hence behaviour – the cultural prevalence of the religious or scientific impulse, for instance – can be programmed into the person.
How would this programming happen? What is the mechanism being proposed? And don’t give me a supernatural explanation please.
Does this give a hint of a reason for science’s intransigence when it comes to the paranormal? I don’t know. But it should be discussed, for it suggests that the ‘natural’ bias against the paranormal is not ‘natural’ at all, but the result of a form of cultural brainwashing
Nope! Actually, this possible conclusion that Anthony offers is based on very weak speculation (that behavior which is not genetic in nature can be programmed into the genes), and when the foundation is weak the whole building will collapse. I understand that Anthony is not claiming that this is in fact what is happening. Nevertheless, he is offering a possible explanation for the science gene, programmed via countless generations of cultural scientific brainwashing, which makes scientists ignore the paranormal. Very neat philosophically, but way too speculative scientifically.
So where do we go from here? Someone who thinks this hypothesis has any merit should first start with the claim about “cultural programming” and establish that this claim is probable. Once that is established, they need to get to work on this “science gene” and identify a possible gene candidate, I guess by running genetic profiles of scientists and looking for common genes and what have you. Then they need to devise a test to figure out whether said science gene does affect attitudes towards the supernatural. That, in a nutshell, is the proper way of approaching this. Remember, just thinking up a hypothesis is not enough, even if it seems to make sense. We could sit around discussing ideas all day long and nothing would come of it unless we actually did the work to test them.
Indeed, it suggests that, in terms of behaviour, nothing is ‘natural’ at all. Rather, we are fluid receptors of change and ideals produced by an over-culture of our collective behaviour and ideas.
Baloney! Fight or flight is not cultural in any sense. It’s much more primitive than any human culture. Generalizations like this are very dangerous. Whenever one says “nothing” or “everything”, one is open to all sorts of criticism, as I hope I just showed here. This statement I completely disagree with.
I think I’ve figured out the difference between Anthony and me. It seems to me that it comes down to possibility and probability. Anthony considers many possibilities but does not take into account their probability. What he has just described in his entry is possible, sure, but very, very improbable. Scientists and skeptics look at both possibilities (hypotheses) and probabilities (experimental results), and when the probabilities remain very low we just stop wasting time on the possibility; unless new evidence is presented to raise the probability, it makes no sense to keep going back to it over and over. That is not a fault in my eyes. It is a virtue. Am I making any sense?
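To put that possibility-versus-probability point a bit more concretely, here is a minimal sketch of how Bayes’ rule treats an initially very improbable claim. The numbers are entirely made up for illustration; the point is only that weak evidence barely budges a tiny prior:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: updated probability of a hypothesis after new evidence."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Hypothetical numbers: a claim starts out very improbable (prior = 0.001).
# Evidence only twice as likely under the claim as under chance barely moves it:
print(round(posterior(0.001, 0.6, 0.3), 4))  # ~0.002
```

In other words, “possible” is cheap; what it takes to make a claim worth revisiting is evidence strong enough to raise that number appreciably.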
I don’t like to repost, but Steve Novella has some great pieces up right now, and this is directly related. –PalMD
As I’ve clearly demonstrated in earlier posts, I’m no philosopher. But I am a doctor, and, I believe, a good one at that, and I find some of this talk about “non-materialist” perspectives in science to be frankly disturbing, and not a little dangerous.
To catch you up on things, consider reading one of Steve Novella’s best posts ever over at Neurologica. While you are there, you can also follow his debate with neurosurgeon Michael Egnor, the latest guru of mind-body dualism.
To sum up (remember, IANAP), most of us science-y types hold to a materialist view of reality, that is, reality is all there is. This reality is susceptible to the investigations of science. Non-materialists and mind-body dualists hold that there is also a “non-material” reality. What exactly this might be, and how one might observe or measure it, is never specified. Instead, they usually use a god-of-the-gaps argument, whereby any gaps in scientific understanding are automatically ascribed to the supernatural. The proof of the supernatural, it is claimed, is a lack of disproof of the supernatural.
Personally, I have no problem with people believing in God, Satan, fairies, or the Flying Spaghetti Monster (may we all be touched by His Noodley Appendages). What I have a problem with is people applying these beliefs to science and medicine.
Non-scientific medical practices, such as homeopathy, faith-healing, and reiki, state various claims of efficacy and of mechanism of action. They can never prove these, but ask us to take their word, and the word of their clients. Once again, if someone takes communion and feels closer to their God, it’s none of my business. But if someone is claiming to affect the health of an individual by invoking supernatural powers, this is immoral and harmful.
The point is simple: if reiki manipulates unseen, unmeasurable forces by unseen and unmeasurable means, creating solely subjective individual results, then reiki (and practices like it) is completely irrelevant to health. What matters in medicine is results, and results that cannot be observed and measured do not, for all practical purposes, exist.
We can measure the effect of beta blockers on a population of heart attack survivors. We can compare the number of subsequent heart attacks in those who do and do not receive the drug. We can come up with a scientifically valid explanation for the results, and we can replicate them.
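As a toy illustration of that kind of comparison (the figures below are entirely hypothetical, not from any real trial), the relative risk is simply the ratio of the event rates in the two groups:

```python
def relative_risk(events_treated, n_treated, events_control, n_control):
    """Ratio of the event rate in the treated group to that in the control group."""
    return (events_treated / n_treated) / (events_control / n_control)

# Hypothetical numbers: 30 repeat heart attacks among 1,000 patients on a
# beta blocker vs. 50 among 1,000 untreated controls.
rr = relative_risk(30, 1000, 50, 1000)
print(round(rr, 2))  # 0.6, i.e. a 40% relative reduction
```

A real trial would of course also report confidence intervals and significance tests, and the result would have to replicate; but the key point stands: the effect is something you can count.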
None of the cult medicine practices that are so popular can do this. Their effects are either unmeasurable by definition (show me a qi), or, when we try to measure the results of their application, the results in aggregate are no better than chance alone.
In all this discussion about naturopathy over the last week or so, what has been left out is that it doesn’t matter whether naturopaths consider what they do to be “medicine-plus”—the plus is irrelevant because it cannot be measured or observed reliably. Unless and until it can, forget the “plus”. It’s only a dream.
Stuart Buck persists in claiming that scientists have a bias against the supernatural, and that we dismiss it out of hand. This isn’t true; the problem is that supernatural explanations are poorly framed and typically unaddressable, so we tend to avoid them as unproductive. What one would actually find, if one took the trouble to discuss the ideas with a scientist, is that they are perfectly willing to consider peculiar possibilities if they are clearly stated. We’ll even briefly consider something as insane and worthless as astrology, which is even less credible as a field of study than Intelligent Design.
Here’s an example from years ago on Usenet, in the newsgroup sci.skeptic. An astrologer, Thomas Seers, was insisting that his weird little pseudoscience was a suitable topic for a science course. One of the skeptics, Robert Grumbine, politely asks him for specifics:
Robert Grumbine: Let us say that I teach astronomy. Let us suppose I’ve decided to spend an hour on astrology. What would my presentation be? Keep in mind that this is a science class, so part of my job would be to discuss what experiments have been done that demonstrate that it works, as well as to describe how it works (in the sense of how the students could make the predictions themselves).
Thomas Seers: Hello Robert,
You appear to be asking a serious question, so I will give you an experiment to try. This will also give you an insight to what Alchemists did years ago.
On 10/20/99 from 2 AM to 8 AM EDT, mix a bowl of jello and you will find it won’t jel. My basic students have this as a homework assigmnet to learn of a void-of-course Moon period. Silly thing, huh. It can be repeated over and over again. Don’t spill it now :-).
There are many words I could attach to the dangerous freakshow that is Jenny McCarthy – self-made advocate for the pseudoscientific notion that there is a link between vaccines and autism: deluded, self-righteous, irrational, the Mayor of Wooville, etc. But I am always interested in the process that gets people to their profound confusion. I believe at the core of Jenny McCarthy’s tragic crusade is an utter lack of humility.
Her lack of humility also seems consistent with someone who has never risen to a level of competence, let alone mastery, in any intellectual discipline. Those who have understand on some level the value of excellence and expertise, and the gulf that separates superficial public knowledge (or what has been called in the internet age, the University of Google knowledge) from a functional depth of understanding.
This brings to mind yet another word that could apply to McCarthy – sophomoric. She has garnered just enough knowledge to think she knows what she is talking about, but not enough to appreciate the depths of her own ignorance.