The memory of water! Journalists were mesmerized by Benveniste's work published in Nature (1988). "We told you!" homeopaths screamed. It was a basophil degranulation experiment, in which basophils were challenged with allergens or anti-IgE. Upon activation, the allergen-IgE (or IgE-anti-IgE) complex binds to the basophil cell membrane, eliciting exocytosis of histamine, proteases, chemokines etc. into the medium, which otherwise form granules inside basophils. The relative presence of granules before and after the challenge can then be measured using toluidine blue staining. What was provocative in Benveniste's study was that they serially diluted the anti-IgE solution down to roughly 10⁻¹²⁰ of its original concentration before challenging the basophils, and could still observe degranulation. The figure from the article (see reference below) shows this.
It is clear that at such an astronomical dilution, there will not be a single molecule of allergen/anti-IgE left in the solution. What remains is some kind of memory of anti-IgE (encoded in the structure of the water?), which is supposedly sufficient to elicit the response. Sounds silly! It indeed was. Accompanied by the fraud-buster and magician James Randi, Nature's editor John Maddox closely monitored a repetition of the experiment and found no evidence for the claims. Was it fraud? Two of the students in the lab were funded by a homeopathy drug company. Or was it an error of human psychology?
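The "not a single molecule left" claim is easy to check with back-of-the-envelope arithmetic. The sketch below assumes hypothetical round numbers (a generous 1 M starting solution and a 1 L sample); the exact starting concentration doesn't matter, since 10⁻¹²⁰ dwarfs Avogadro's number by nearly a hundred orders of magnitude.

```python
# Expected number of anti-IgE molecules after a 10^-120 dilution.
# Starting amounts (1 M, 1 L) are hypothetical round numbers chosen
# to be generous; real antibody solutions are far less concentrated.
AVOGADRO = 6.022e23  # molecules per mole

start_molecules = 1.0 * AVOGADRO   # 1 mole of anti-IgE in 1 L
dilution = 1e-120
expected = start_molecules * dilution

print(f"expected molecules remaining: {expected:.1e}")
# ~6.0e-97 -- effectively zero, even from a mole-scale starting amount
```

In other words, any observed degranulation at that dilution cannot be explained by residual anti-IgE molecules.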
"Seeing is believing," they say. True, but not when you see what you want to see. I came across this incident during my PhD days, when I was engaged in many in-situ experiments involving confocal microscopy. The above study also involved microscopic counting of stained cells. In a typical such experiment, we have a control sample (say, uninduced basophils, or cells induced with water never treated with anti-IgE) and a test sample (the induced one), and we count the stained cells under the microscope. The problem is that we are counting cells manually through our eyes, which are not independent of our brain. Given that we know which sample is the control and which is the test, our brain is predisposed with a certain bias (probably unintentional). This bias can interfere with eye-brain coordination, and we tend to see what we want to see. In the above experiment, for example, a student will end up counting more unstained cells in the test sample than in the control, unless the experiment is done in a blinded manner and performed by more than one person. I am not sure that this is practiced in most labs, even today. Moreover, the number of cells counted needs to be sufficiently large, at least 300 or so. I have seen myself that the statistics often changed when I counted 100 cells and then added 100 more, while this variation dropped after counting 250-300 cells. Having said that, I often spot articles reporting cell counts in the range of 20-30 cells! What limits these authors to stop counting at 20-30? Is it laziness, or did the microscope break? If that sounds crazy, then I have a strong reason to argue that it is possibly the fear of losing the significance of their hypothesis (indeed, their belief). In fact, it also hints that they might have been counting only selected cells that please them.
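The sample-size point above can be made quantitative. If each counted cell is independently stained with some true probability p, the estimated fraction follows a binomial distribution, and its standard error sqrt(p(1-p)/n) shows why estimates from 20-30 cells wander while estimates from ~300 cells settle down. A minimal sketch, with a hypothetical true fraction of 0.4:

```python
import math

# Hypothetical true fraction of degranulated (unstained) cells.
p = 0.4

# Binomial standard error of the estimated fraction for various counts.
# The +/- 2*SE band is an approximate 95% range for the estimate.
for n in (20, 100, 300):
    se = math.sqrt(p * (1 - p) / n)
    print(f"n={n:4d}: estimate varies by about +/-{2 * se:.2f}")
```

With n=20 the estimate swings by roughly ±0.22 (a 40% count could easily read as 20% or 60%), whereas at n=300 the swing shrinks to about ±0.06, matching the author's observation that counts stabilize around 250-300 cells.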
These authors also do not provide snapshots of a colony or cluster of cells (which is possible at a relatively broader field of view), and instead show only single cells of their choice. It goes without saying that images need to be shown in all three X, Y and Z projections, to ensure that the signal comes from the appropriate location inside the cell. Further, slides are often prepared in a manner that squeezes the cell into an omelette-like shape, which is not its native form. This can affect the interpretation of the images, for example in spatial co-localization studies.
I think it's time for journals to frame proper guidelines for in-situ experiments.
Ref: http://www.nature.com/nature/journal/v333/n6176/pdf/333816a0.pdf