Lest you think my thoughts on modern "science" are simply hogwash, I offer this article: "Analytical Trend Troubles Scientists."
I am not the only one who thinks that making random observations and calling the result science is troubling.
At issue in the article are "observational studies." These are studies in which scientists analyze "published data": most likely data published for some other study, and most likely data for which we have only the reported "results" and not the actual underlying data that was collected.
The real question boils down to this: if I give two scientific teams the same data (in the case of the article, U.K. cancer cases versus the use of a drug), should I expect the same answer as to whether or not the drug creates a significant increase in cancer?
You might think that there are only two answers here: yes or no.
But there are in fact more. For one thing, it may be that the data shows no meaningful association between the drug's use and cancer. This could be for any number of reasons: not enough data, no statistical significance (i.e., a result not really distinguishable from the "noise" in the data), bad data, and so on.
So legitimate scientific results include "bad data," "no statistically significant result," and so on.
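To make the "noise" point concrete, here is a minimal sketch (invented data, Python with numpy and scipy assumed; none of this comes from the article). By the usual p < 0.05 convention, roughly 1 in 20 comparisons of pure noise against pure noise will look "significant" by chance alone:

```python
# Hypothetical sketch: mine pure noise for "significant" associations.
# By construction there is NO real relationship anywhere in this data,
# yet at p < 0.05 roughly 1 in 20 comparisons will look "significant."
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
outcome = rng.normal(size=500)          # a made-up "cancer risk score"

false_hits = 0
n_tests = 100                           # 100 unrelated "exposures"
for _ in range(n_tests):
    exposure = rng.normal(size=500)     # pure noise, unrelated to outcome
    r, p = pearsonr(exposure, outcome)
    if p < 0.05:
        false_hits += 1

# Expect about n_tests / 20 = 5 spurious "findings" on average.
print(f"'significant' associations found in pure noise: {false_hits}")
```

Pick out the handful of comparisons that "hit," write them up, and you have a publishable-looking result built entirely out of noise.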
What the article shows us is this:
1) Given the same set of data, two groups of scientists produce opposing results: the drug causes cancer; the drug does not cause cancer.
2) The "difference" between the two is "methodology," that is, how each team chose to look at the data (see the sketch after this list).
3) Correlating this with other posts, in particular the ones where I discuss things like "Falsified Medical Studies the Norm," you also have to believe that sheer incompetence may be involved.
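Here is a minimal, hypothetical sketch of point 2 (all counts invented, not the actual U.K. data): the same cohort analyzed two ways, a crude pooled test versus one stratified by age, gives opposite answers. This is classic confounding (Simpson's paradox): older patients both take the drug more and get cancer more, regardless of the drug.

```python
# Hypothetical illustration: the same cohort, analyzed two ways, gives
# opposite answers. All counts are invented. Age confounds the result:
# older patients take the drug more AND get cancer more, drug or no drug.
from scipy.stats import chi2_contingency

# Each table: rows = [took drug, did not], columns = [cancer, no cancer].
older   = [[80, 320], [20, 80]]    # 20% cancer rate in BOTH rows
younger = [[5, 95], [20, 380]]     # 5% cancer rate in BOTH rows
pooled  = [[85, 415], [40, 460]]   # older + younger, added cell by cell

# Methodology A: test the pooled data, ignoring age.
_, p_pooled, _, _ = chi2_contingency(pooled)
print(f"pooled (ignore age): p = {p_pooled:.6f}")  # tiny p -> "drug causes cancer"

# Methodology B: test within each age stratum.
for name, table in [("older", older), ("younger", younger)]:
    _, p, _, _ = chi2_contingency(table)
    print(f"{name:>7} stratum:    p = {p:.6f}")    # p = 1.0 -> "no association"
```

Neither team lied and neither team miscalculated; they simply made different, individually defensible methodological choices, and the same data happily supported both headlines.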
Now the problem is: what do people use these studies for?
Well, for one thing, to make policy regarding drugs: is this drug good for the baby or not? Will this drug give grandma cancer? And on and on...
And this is not just for drugs. These types of studies are now used for climate change, nutrition, psychology and environmental science.
In the decade 1990-2000 some 80,000 "observational studies" were conducted. In the decade 2000-2010 that number more than tripled, according to the article, to 263,000.
A number of people analyzing and monitoring this published research point out that, at best, 1 in 5 of these studies produces results that someone else can replicate. At worst, it is 1 in 20.
So each year our science effort here in the US produces some 26,000 such research reports, of which at most about 5,200 actually hold up. Worse, it may be that as few as 1,300 contain valid science.
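A quick back-of-the-envelope check of that arithmetic, using only the article's decade total and the quoted replication rates:

```python
# Sanity-check the numbers above using the article's figures.
studies_per_decade = 263_000
studies_per_year = studies_per_decade / 10   # ~26,300; the text rounds to 26,000

best_case  = 26_000 * (1 / 5)    # 1 in 5 replicate  -> 5,200 per year
worst_case = 26_000 * (1 / 20)   # 1 in 20 replicate -> 1,300 per year

print(f"per year:   {studies_per_year:,.0f}")
print(f"best case:  {best_case:,.0f}")   # 5,200
print(f"worst case: {worst_case:,.0f}")  # 1,300
```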
So what does this say about the US as a country?
What does this say about our educational system?
It says that, particularly in the area of public medical policy where these kinds of studies are common, we are flying almost totally blind, with perhaps only 5% of the information from our finest scientific minds actually being correct.
Imagine if the steering wheel on your car worked correctly at that same rate of reliability (or, let's be more generous, say 20% of the time).
Would you drive?
What if your child's vaccinations had this level of reliability, viability or safety?
Would you vaccinate your child?
The problem here is what I will call "public policy science": "science" where the outcome is basically predetermined by malice, incompetence or bias so that a point can be made: yes, the earth is dying because of man; see, our study proves it.
The problem is that no one these days bothers to attempt to reproduce these kinds of results.
Yet the results make their way into the news, onto your cell phone, etc., and they become like urban legends.
Everyone "knows" X is happening, whether it's global climate change or cholesterol in your arteries.
So therefore it MUST be true.
Even if it's not.
A big part of this is that medicine (not the hard-science kind, e.g., biochemistry) is like sociology, environmental science and many other fields: it's not a "cause and effect" or "law"-based process.
In chemistry either the chemical reactions occur or they don't; either the result is electrically conductive or it's not; and so on. Similarly for physics.
In these hard sciences people often build on the work of others, so if a result does not hold up or cannot be reproduced, it's quickly found out.
This is not true in the "social science" world.
If I write a paper showing that "more children are starving" or "more children are dying of cancer" because of X, then public funds flow toward "fixing" the problem. Fixing involves more study, studies of the fixes, studies of the studies of the fixes (as described in the WSJ article). TV ads appear with starving children. Everyone "knows" the children are starving, and so more money pours in.
This is not science.
It's the wolf (in the form of lies born of deviousness or incompetence) preying on the stupid.
Yet because it is called "science," people don't question it, mostly because they A) agree with the result and B) fear the repercussions of disagreeing.
These types of studies abound in the US Medicare system, and look what that is doing to our budget.
But until people wake up and see what's going on it will not stop.