
There seems to be an increasing desire on this board to dig up and quote studies to support or refute various positions, often without a full understanding of the technical merit of those papers. While neither Laura nor I are doctors or medical professionals of any kind, a large fraction of our daily job is sifting through an enormous amount of data on various Space Shuttle problems and picking out the pertinent information. That daily effort has given us far too much experience with stripping the useless BS from someone's argument and finding the (far too often) few nuggets of truth. With that in mind, I'd suggest the following guidelines, framed as questions to answer before you decide how much to trust a paper.

1) Who paid for the research and for publishing the paper? Finding out who paid can often tell you whether to expect biased research or incomplete, slanted analysis. I'm not indicting specific manufacturers with this; everyone pays for and publishes papers that benefit them. Some go to an extreme and accept poor techniques and poor analysis to prove a point; others do research as pure and untainted as possible but only publish the papers that defend their positions.

2) Ask the same question about who did the work. There are cases where the researcher isn't directly connected to the payer; it's rare, but it does happen. Understand the researcher's bent as well as the payer's.

3) Where was the paper presented? Some venues have other researchers carefully review each paper for technical content, some give only a cursory review, and some publish papers without any review at all. Some conferences permit longer (20 minute) talks with questions; others permit short (5 minute) talks with limited questions. Ideally, you want a peer-reviewed paper presented with extensive questions from the audience, but even then basic mistakes can slip through. I say this with assurance, as I've seen papers in my own field get through the system with logic errors and poor testing technique.

4) A second part of the previous question: are you reading the entire paper, an abstract, an extensive but summarized presentation, or a short presentation? The details supporting the conclusions tend to be the first things cut when the work is summarized.

5) To begin analyzing the actual work, first ask what question the research was originally intended to answer. If the researcher is lucky, he/she managed to answer that question in its original form. More typically, the data gathered don't quite answer the original question completely, and the conclusions get narrowed to whatever the data actually support. As you read the paper, try to get a feel for the original intent. That tells you what kinds of data were gathered and what kinds were probably omitted. What you're really looking for is a study that was biased from the start, intentionally or unintentionally, because the researcher didn't gather the right kinds of data to answer the question he/she had in mind.

6) Similarly, understand what the actual work was. Did the researcher gather data from other papers and present a summary article? Was it a clinical study? Was it a lab-based test? If it was a lab test or clinical trial, does the work make sense to you as the kind of work you'd do to answer the question at hand? If it was a summary article, did the researcher include all of the papers on the subject, or were certain kinds of papers excluded? Does that bias taint the conclusions?

7) Abuse of statistical sample size is rampant. Basically, this needs to pass the gut-feel test: did the researcher test enough cases to represent the entire population of people who fall into that category? A test of 10 ADR patients doesn't support a generalized statement about all ADR patients. A single patient is sufficient to say that an observed condition CAN happen, but it takes lots of people to say how LIKELY something is; the sketch below shows why.
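To put a rough number on that, here's a minimal sketch in Python. The 70% success rate and the sample sizes are made up for illustration, not taken from any real study; it just shows how the uncertainty around an observed rate shrinks as more patients are tested:

[CODE]
# How sure can we be about an observed success rate? The 95% confidence
# interval around a proportion shrinks roughly as 1/sqrt(n), so small
# studies leave huge uncertainty. All numbers here are hypothetical.
import math

def proportion_ci(successes: int, n: int, z: float = 1.96):
    """95% confidence interval for a proportion (normal approximation)."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

for n in (10, 100, 1000):
    successes = round(0.7 * n)  # the same observed 70% rate each time
    low, high = proportion_ci(successes, n)
    print(f"n={n:4d}: observed 70%, 95% CI = {low:.0%} to {high:.0%}")
[/CODE]

With 10 patients, an observed 70% success rate is statistically consistent with anything from roughly 42% to 98%, which is why a small case series can show that something happens but not how often.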

8) Statistical analysis of the data is often abused to justify conclusions. A detailed discussion of what to look for is beyond my knowledge, but, in general, better papers discuss what kind of statistical treatment was used to reach each conclusion. When the test involves real people, there is so much person-to-person variability that a conclusion should either be blindingly obvious from the data or supported by so many tested people that it's hard to argue with. If you see small sample sizes (few people) or complex conclusions, be skeptical; the sketch below shows how easily small samples mislead.
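As a gut-feel illustration, here's a minimal simulation in Python of two treatments that are deliberately identical. The means, the spread, and the "10-point improvement" threshold are all invented for the example; the point is that with only a handful of patients per group, ordinary person-to-person variability regularly produces a difference large enough to look like a real effect:

[CODE]
# Two IDENTICAL treatments, measured with realistic person-to-person
# scatter. How often does a small trial show a "10-point improvement"
# purely by chance? All numbers are made up for illustration.
import random
import statistics

random.seed(42)

def fake_trial(n: int) -> float:
    """Difference in mean outcome between two identical treatments."""
    group_a = [random.gauss(50, 15) for _ in range(n)]  # true mean 50
    group_b = [random.gauss(50, 15) for _ in range(n)]  # same true mean
    return statistics.mean(group_a) - statistics.mean(group_b)

for n in (5, 50, 500):
    diffs = [fake_trial(n) for _ in range(2000)]
    big = sum(1 for d in diffs if abs(d) > 10)  # looks like a real effect
    print(f"n={n:3d} per group: {100 * big / len(diffs):.1f}% of trials "
          f"show a 10+ point difference by chance alone")
[/CODE]

With 5 patients per group, roughly a quarter to a third of these fake trials show a "large" difference even though the treatments are identical; with 500 per group, essentially none do. That's the gut-feel test in action.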
__________________
Laura - L5S1 Charitee
C5/6 and 6/7 Prodisc C
Facet problems L4-S1
General joint hypermobility

Jim - C4/5, C5/6, L4/5 disk bulges and facet damage, L4/5 disk tears, currently using regenerative medicine to address

"There are many Annapurnas in the lives of men" Maurice Herzog