-
12-28-2010, 09:47 PM #1
An Interesting Read: Is the Scientific Method Flawed?
A colleague passed this article in the New Yorker on to me. I found it interesting, but unfortunately nothing I haven't already experienced countless times over the past 15 years as an academic and consulting statistician. However, it is heartening to see that the more "mainstream", non-scientific, literature has picked up on it.
I'd be interested to hear others' views. It is not an overly long read, and well worth it if you can spare 10 minutes.
The Decline Effect and the Scientific Method : The New Yorker
James.
<This signature intentionally left blank>
-
12-28-2010, 11:50 PM #2
Took me thirty minutes to read the article but that could have been the decline effect. OTOH, maybe Jimbo was 'selective reporting' when he estimated ten minutes.
The one guy who said, “But the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.”
Reminds me of my grandfather saying, "Believe half of what you see and nothing that you read." Glad some people can still have faith in science.
Be careful how you treat people on your way up, you may meet them again on your way back down.
-
12-29-2010, 12:31 AM #3
I read it too. I thought the reasoning, cleverly hidden at the end of the article, points out the likely flaw.
In case Gugi reads this thread though I must be clear that testing is better than no testing, in the scientific approach.
Anyway, my only thought is that as more testing is done the results become diluted, particularly if you're biased toward obtaining results. I liked the description of the cards and how the occurrence was only "one in a million". But if you are looking at only one occurrence you could be looking at the millionth result and not know it. You could have two "one in a million" results back to back. If you stop testing there you would be convinced that you were right, even if the results were not repeatable in two million more events.
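That stop-when-you're-ahead problem has a name in statistics: optional stopping. It's easy to see it in action with a quick simulation (my own sketch, not anything from the article): flip a fair coin, peek at the running tally every 20 flips, and declare a "discovery" the first time the test looks significant. Even though the coin is fair, the false-positive rate lands far above the nominal 5%.

```python
import random

random.seed(42)

def peeking_experiment(max_flips=500, check_every=20):
    """Flip a fair coin, peeking at the running tally every `check_every`
    flips; declare a 'discovery' the first time |z| exceeds 1.96."""
    heads = 0
    for n in range(1, max_flips + 1):
        heads += random.random() < 0.5
        if n % check_every == 0:
            p_hat = heads / n
            z = (p_hat - 0.5) / (0.25 / n) ** 0.5  # normal approximation
            if abs(z) > 1.96:
                return True  # a fair coin "proved" to be biased
    return False

trials = 2000
rate = sum(peeking_experiment() for _ in range(trials)) / trials
print(f"false-positive rate with repeated peeking: {rate:.1%}")
```

If you only ever checked once, at a fixed sample size decided in advance, the rate would sit near 5%; checking 25 times and stopping at the first "hit" multiplies it.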
-
12-29-2010, 01:11 AM #4
Ya cannae change the laws of physics Jim ! .... hehehe ..... or can you ???
The white gleam of swords, not the black ink of books, clears doubts and uncertainties and bleak outlooks.
-
12-29-2010, 01:27 AM #5
I must admit, I did not read the whole article. But a few things struck me:
"If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved?"
I don't really think replication is what separates science and pseudoscience. Rather, science accepts the possibility of falsification when replication fails, but pseudoscience does not. In this regard, it's not the ability to replicate results, but the willingness to modify theories when replication fails, that sets science apart from pseudoscience. To me, that means that if the "scientists" discussed, for whom replication fails, are really scientists, they will modify their theories rather than attempting to make new results fit old theories. To do otherwise would be pseudoscience.
With that in mind, I thought it interesting that all of the examples that I read (through the part about symmetry) were based in psychology, which is considered pseudoscience or soft science by a LOT of people. Nothing against psychology - I think it may one day become a science, but I don't think it's there yet. Too many unknowns, too many variables, not enough development of the field.
An interesting side point is that there are no examples (that I read) showing any problems with replication in the hard sciences.
-
12-29-2010, 01:30 AM #6
I think the scientific method is secure. The issue is the way people are interpreting and carrying out their research. There is too much pressure from funded studies sponsored by results-oriented corporations, or from politics, and I'm afraid shortcuts are taken or the original theory isn't as well thought out as it should be.
No matter how many men you kill you can't kill your successor-Emperor Nero
-
-
12-29-2010, 01:40 AM #7
I am not a statistician, and I don't do scientific research.
I do spend a lot of my time reading medical research and people's conclusions based on that research. I have seen a lot of published studies that (it seems to me) have sample sizes far too small to be considered significant, and often there are no follow-up studies or experiments.
Interestingly, there are a lot of things in generally accepted medical guidelines that are based on sample sizes of 15 participants or so.
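To put a number on why a 15-participant sample is shaky, here's a quick simulation (my own illustration, with an assumed true effect of 0.5 standard deviations, not from any real guideline): run a thousand hypothetical 15-subject studies and look at how wildly the estimated effect swings from one to the next.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.5  # assumed true effect, in standard-deviation units
N = 15             # sample size like the guidelines mentioned above

def one_study(n=N):
    """Simulate one small trial; return its estimated effect size."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)]
    return statistics.mean(sample)

estimates = [one_study() for _ in range(1000)]
print(f"true effect: {TRUE_EFFECT}")
print(f"estimates across 1000 fifteen-subject studies ranged from "
      f"{min(estimates):.2f} to {max(estimates):.2f}")
```

Some of those studies will even show the effect going the wrong direction, which is exactly how a guideline built on one small study can fail to replicate.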
Strange women lying in ponds distributing swords is no basis for a system of government.
-
12-29-2010, 01:50 AM #8
That's an interesting way to rewrap the difference between a Type 1 and a Type 2 research error.
What is more fascinating than his critique of research methods in his example of second-generation antipsychotics is that those drugs are heavily marketed directly to the public. This increases sales and the public's belief that the drugs must be good. Even though clinicians can see problems developing with both the effects and the side effects, patients demand the medication, which leads to a self-fulfilling rationale for the drug's use despite false-positive evidence.
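The "journals only wanted confirming data" complaint quoted earlier is exactly the mechanism that inflates those false positives into an apparent effect. A rough sketch (my own numbers: an assumed small true effect of 0.2 sd and 30 patients per trial) of what happens when only the significant trials get published:

```python
import random
import statistics

random.seed(7)

TRUE_EFFECT = 0.2  # assumed small real drug effect (sd units)
N = 30             # patients per trial

def run_trial():
    """One trial: return (estimated effect, was it 'significant'?)."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    mean = statistics.mean(sample)
    se = 1.0 / N ** 0.5
    return mean, mean / se > 1.96  # one-sided positive result

results = [run_trial() for _ in range(5000)]
published = [m for m, significant in results if significant]

print(f"true effect:           {TRUE_EFFECT}")
print(f"mean published effect: {statistics.mean(published):.2f}")
```

The published literature ends up reporting an effect roughly double the truth, so later, bigger replications look like a "decline" even though nothing about the drug changed.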
Well, I thought the article made sense. There is a lot of bad research out there. I had professors who would support replication by the students, all the while having to suffer the pressure of publish-or-perish to gain their tenure.
“Nothing discloses real character like the use of power. Most people can bear adversity. But if you wish to know what a man really is, give him power.” R.G. Ingersoll
-
12-29-2010, 02:39 AM #9
I think one of the problems is human nature and the tendency to CYA. We see it in all human pursuits, every walk of life. As far as the psychotropic drugs go, I can't help but think of the pharmaceutical companies who recently hid the research showing their meds caused heart disease, stroke, or whatever it was. Many years ago, cigarette companies denied smoking-related illness through their own scientific studies, and there were the asbestos companies who also hid the real data. Those are examples of deliberate deception, but then there are the things we thought science proved.
Now we are asking: should women have mammograms, should men have prostate exams and surgery if the PSA test is positive? CT scans used to be so common, and now they find the radiation exposure may be worse than the disease they are looking for. I can remember when all of that was the 'known' science. We think we know a lot, the human race. Well, 'we' just topped another billion in population, stuck on an earth with finite resources. Science better get its ass in gear or it is going to be a hell of a place to live, if it lasts another hundred years. Pleasant dreams.
Be careful how you treat people on your way up, you may meet them again on your way back down.
-
12-29-2010, 02:57 AM #10
This is a very interesting, and timely, topic (for the Medical Industry). My career is selling medical devices. So, we deal with studies regularly.
Recently, there was a study done showing vertebroplasty to be an ineffective treatment for patients with vertebral compression fractures when compared to a "sham" procedure. Well, to make what could be a very long story short, there were very well documented problems with the methodology used, the patient population tested, the arbitrary shortening of the follow-up period, and a "sham procedure" that is literally effective for approximately one year (oh yeah, the follow-up was shortened to one year).
Even with all those problems, another 13 patients or 3 months of extended follow-up would have shown a statistically significant benefit. This is borne out by the disclosed fact that 47% of the patients who received the sham procedure crossed over to have a vertebroplasty done after the study was over (and showed improvement). In fact, the investigator wrote in his introduction that "vertebroplasty is an efficacious treatment", but the New England Journal of Medicine made him take that line out in order to publish the piece (this is according to conversations with the investigator in his office).
The problem isn't with the method, it's with what people do with the methods and how they interpret information. I'm afraid these types of studies are going to be more and more common in the immediate future as people try to figure out ways to decrease the cost of healthcare. Blue Cross Blue Shield is already trying to stop reimbursements for fusion for degenerative disc disease.