-
12-28-2010, 09:47 PM #1
An Interesting Read: Is the Scientific Method Flawed?
A colleague passed this article from the New Yorker on to me. I found it interesting, though unfortunately nothing I haven't already experienced countless times over the past 15 years as an academic and consulting statistician. Still, it is heartening to see that the more "mainstream", non-scientific literature has picked up on it.
I'd be interested to hear others' views. It is not an overly long read, and well worth it if you can spare 10 minutes.
The Decline Effect and the Scientific Method : The New Yorker
James.
-
12-29-2010, 01:27 AM #2
I must admit, I did not read the whole article. But a few things struck me:
"If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved?"
I don't really think replication is what separates science and pseudoscience. Rather, science accepts the possibility of falsification when replication fails, but pseudoscience does not. In this regard, it's not the ability to replicate results, but the willingness to modify theories when replication fails, that sets science apart from pseudoscience. To me, that means that if the "scientists" discussed, for whom replication fails, are really scientists, they will modify their theories rather than attempting to make new results fit old theories. To do otherwise would be pseudoscience.
With that in mind, I thought it interesting that all of the examples I read (through the part about symmetry) were based in psychology, which is considered pseudoscience, or a soft science, by a LOT of people. Nothing against psychology - I think it may one day become a science, but I don't think it's there yet. Too many unknowns, too many variables, not enough development of the field.
An interesting side point is that there are no examples (that I read) showing any problems with replication in the hard sciences.
-
12-29-2010, 01:30 AM #3
I think the scientific method is secure. The issue is the way people are interpreting and carrying out their research. There is too much pressure, whether from studies sponsored by corporations who are results-oriented or from politics, and I'm afraid shortcuts get taken, or the original theory isn't as well thought out as it should be.
-
12-29-2010, 01:40 AM #4
I am not a statistician, and I don't do scientific research.
I do spend a lot of my time reading medical research and the conclusions people draw from it. I have seen a lot of published studies whose sample sizes seem (to me) far too small to support their conclusions, and often there are no follow-up studies or experiments.
Interestingly, there are a lot of things in generally accepted medical guidelines that are based on sample sizes of 15 participants or so.
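To put a rough number on that worry, here is a minimal sketch (all numbers hypothetical: a two-group trial, a "moderate" standardized effect of 0.5, a two-sided 5% test) of how little power a 15-subjects-per-group study has, using the standard normal approximation:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(d, n_per_group, z_crit=1.96):
    """Normal-approximation power of a two-sided, two-sample test at the
    5% level for standardized effect d (ignoring the far rejection tail)."""
    noncentrality = d * math.sqrt(n_per_group / 2.0)
    return normal_cdf(noncentrality - z_crit)

print(f"power at n=15 per group: {approx_power(0.5, 15):.2f}")  # ~ 0.28
print(f"power at n=64 per group: {approx_power(0.5, 64):.2f}")  # ~ 0.81
```

In other words, a 15-per-group trial of a genuinely effective treatment would miss the effect nearly three times out of four, which is why guidelines resting on such studies deserve suspicion in both directions.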
-
12-29-2010, 01:50 AM #5
That's an interesting way to reframe the difference between a Type I and a Type II research error.
What is more fascinating than his critique of research methods, in his example of second-generation antipsychotics, is that those drugs are heavily marketed directly to the public. This increases sales and the public's belief that the drugs must be good. Even though clinicians can see problems developing with both the drugs' effects and their side effects, patients demand the medication, which leads to a self-fulfilling rationale for the drug's use despite the false-positive evidence.
Well, I thought the article made sense. There is a lot of bad research out there. I had professors who would encourage replication by their students, all the while suffering the publish-or-perish pressure of the tenure track.
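The "decline effect" the article dwells on actually falls straight out of this Type I/Type II machinery: when only significant results get published, the published effect sizes are systematically inflated, so honest replications look like a decline. A minimal Monte Carlo sketch (all numbers hypothetical: true standardized effect 0.3, 20 subjects per group, two-sided 5% test):

```python
import math
import random
import statistics

random.seed(42)

TRUE_D = 0.3    # true standardized effect (hypothetical)
N = 20          # subjects per group (hypothetical)
T_CRIT = 2.024  # two-sided 5% critical t value, df = 2*N - 2 = 38

def one_study():
    """Simulate one two-group study; return (observed effect, significant?)."""
    treated = [random.gauss(TRUE_D, 1.0) for _ in range(N)]
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    pooled_sd = math.sqrt(
        (statistics.variance(treated) + statistics.variance(control)) / 2.0)
    d_obs = (statistics.mean(treated) - statistics.mean(control)) / pooled_sd
    t = d_obs * math.sqrt(N / 2.0)
    return d_obs, abs(t) > T_CRIT

results = [one_study() for _ in range(5000)]
all_effects = [d for d, _ in results]
published = [d for d, significant in results if significant]

# The studies as a whole recover the true effect, but the "published"
# (significant-only) subset overstates it -- replications then "decline".
print(f"mean effect, all studies:       {statistics.mean(all_effects):.2f}")
print(f"mean effect, significant only:  {statistics.mean(published):.2f}")
print(f"fraction reaching significance: {len(published) / len(results):.2f}")
```

No fraud or sloppiness is needed: the selection step alone roughly doubles the apparent effect in this setup, which is one boring statistical reading of the "decline" the article treats as mysterious.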
-
12-29-2010, 02:39 AM #6
I think one of the problems is human nature and the tendency to CYA. We see it in all human pursuits, every walk of life. As far as the psychotropic drugs go, I can't help but think of the pharmaceutical companies who recently hid research showing that their meds caused heart disease, stroke, or whatever it was. Many years ago there were the cigarette companies that denied smoking-related illness through their scientific studies, and the asbestos companies who also hid the real data. Those are examples of deliberate deception, but then there are the things we thought science had proved.
Now we are asking: should women have mammograms, should men have prostate exams, and surgery on a positive PSA test? CAT scans used to be so common, and now they find the radiation exposure may be worse than the disease they are looking for. I can remember when all of that was the 'known' science. We think we know a lot, the human race. Well, 'we' just topped another billion in population, stuck on an earth with finite resources. Science had better get its ass in gear or it is going to be a hell of a place to live, if it lasts another hundred years. Pleasant dreams.
-
12-29-2010, 02:57 AM #7
This is a very interesting, and timely, topic (for the Medical Industry). My career is selling medical devices. So, we deal with studies regularly.
Recently, there was a study done showing vertebroplasty to be an ineffective treatment for patients with vertebral compression fractures when compared to a "sham" procedure. To make what could be a very long story short, there were well-documented problems with the methodology used, the patient population tested, the arbitrary shortening of the follow-up period, and a "sham" procedure that is itself literally effective for approximately one year (and yes, the follow-up was shortened to one year).
Even with all those problems, another 13 patients or 3 more months of follow-up would have shown a statistically significant benefit. This is borne out by the disclosed fact that 47% of the patients who received the sham procedure crossed over to have a vertebroplasty done after the study was over (and showed improvement). In fact, the investigator wrote in his introduction that "vertebroplasty is an efficacious treatment", but the New England Journal of Medicine made him take that line out in order to publish the piece (this according to conversations with the investigator in his office).
The problem isn't with the method; it's with what people do with the methods and how they interpret information. I'm afraid these types of studies are going to become more and more common in the immediate future as people try to figure out ways to decrease the cost of healthcare. Blue Cross Blue Shield is already trying to stop reimbursements for fusion for degenerative disc disease.
-
12-29-2010, 03:00 AM #8
I've had to read my fair share of publications in many fields of science, both soft and hard. I could only make it 3/4 of the way through this article, and I have to say there is a reason my biology department has a rule that the New Yorker is not allowed in the building. It was filled with half-truths, misdirections, and blatant dismissals of entire sciences. I do agree about the flexibility of the sciences and the errors that come with corporate funding, but methods cannot be compared between sciences; they are wildly different, even within the same science.
-
12-30-2010, 06:40 AM #9
I have many problems with that article, but the main issue is that the author is just setting up a straw man. There are different levels at which to discuss the 'scientific method', and it's pretty disingenuous to take arguments from a somewhat simplistic framework and then extrapolate them to draw conclusions about something much more fundamental. If you are going to deal with the fundamental (philosophical) issues, you have to address them within the corresponding fundamental framework.
His central conclusion is
Originally Posted by Jonah Lehrer
And to stoop to the author's level, we probably shouldn't discount the journalistic bias towards publishing - it doesn't need to be particularly sound logically, it only has to be what the target readership would like.
I wish he had taken the more straightforward approach and actually investigated either the philosophy or the practice of science more thoroughly, instead of making wild and unfounded generalizations about both.
-
08-08-2011, 06:04 AM #10