I was going to talk about doctors and how they have some sort of weird cult status that elevates them above mere mortals, despite the fact that they are, on average, average people, with no real incentive to be the best in their trade or to stay up to date on new technology and research, and how people don't behave like customers when it comes to their health (wanting the best possible product), instead delegating all of that to their doctor... but I decided against it.
Instead, you get statistics. And if you live in the US, ask your doctor for your uptodate.com analysis; 90% of academic hospitals have access to it.
90% of preclinical cancer studies could not be replicated:
http://www.nature.com/nature/journal/v483/n7391/full/483531a.html

"It is frequently stated that it takes an average of 17 years for research evidence to reach clinical practice. Balas and Boren, Grant, and Wratschko all estimated a time lag of 17 years measuring different points of the process." -
http://www.jrsm.rsmjournals.com/content/104/12/510.full

"The authors estimated the volume of medical literature potentially relevant to primary care published in a month and the time required for physicians trained in medical epidemiology to evaluate it for updating a clinical knowledgebase.... Average time per article was 2.89 minutes, if this outlier was excluded. Extrapolating this estimate to 7,287 articles per month, this effort would require 627.5 hours per month, or about 29 hours per weekday."
One-third of hospital patients are harmed by their stay in the hospital, and 7% of patients are either permanently harmed or die:
http://www.ama-assn.org/amednews/2011/04/18/prl20418.htm

Statistical Illiteracy

Doctors often confuse sensitivity and specificity (Gigerenzer 2002); most physicians do not understand how to compute the positive predictive value of a test (Hoffrage and Gigerenzer 1998); a third overestimate benefits if they are expressed as positive risk reductions (Gigerenzer et al 2007). (See the worked example after this list.)
Physicians think a procedure is more effective if the benefits are described as a relative risk reduction rather than as an absolute risk reduction (Naylor et al 1992).
Only 3 out of 140 reviewers of four breast cancer screening proposals noticed that all four were identical proposals with the risks represented differently (Fahey et al 1995).
60% of gynecologists do not understand what the sensitivity and specificity of a test are (Gigerenzer et al 2007).
95% of physicians overestimated the probability of breast cancer given a positive mammogram by an order of magnitude (Eddy 1982).
When physicians receive prostate cancer screening information in terms of five-year survival rates, 78% think screening is effective; when the same information is given in terms of mortality rates, 5% believe it is effective (Wegwarth et al, submitted).
Only one out of 21 obstetricians could estimate the probability that an unborn child had Down syndrome given a positive test (Bramwell, West, and Salmon 2006).
Sixteen out of twenty HIV counselors said that there was no such thing as a false positive HIV test (Gigerenzer et al 1998).
Only 3% of questions in the certification exam for the American Board of Internal Medicine cover clinical epidemiology or medical statistics, and risk communication is not addressed (Gigerenzer et al 2007).
British GPs rarely change their prescribing patterns and when they do it’s rarely in response to evidence (Armstrong et al 1996).
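To make the positive-predictive-value point concrete, here is a minimal Bayes-rule sketch in Python. The numbers are illustrative only, roughly the mammography figures Gigerenzer uses in teaching examples (1% prevalence, 90% sensitivity, 9% false-positive rate), not data from any single study cited above.

```python
# A minimal sketch of the base-rate calculation most physicians reportedly get wrong.
# The numbers below are illustrative (roughly Gigerenzer's mammography teaching
# figures), not data from any single study cited above.

def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' rule."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(
    prevalence=0.01,           # 1% of women screened actually have breast cancer
    sensitivity=0.90,          # 90% of cancers produce a positive mammogram
    false_positive_rate=0.09,  # 9% of healthy women also test positive
)
print(f"P(cancer | positive mammogram) = {ppv:.0%}")  # about 9%, not ~90%
```

Translating the same numbers into natural frequencies makes the result intuitive: out of 1,000 women screened, about 9 with cancer test positive, while about 89 without cancer also test positive, so only roughly 1 in 11 positive mammograms reflects an actual cancer.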
Drug Advertising

Direct-to-consumer advertising by pharmaceutical companies, which is intended to sell drugs rather than to educate, often does not contain information about a drug's success rate (only 9% of ads did), alternative methods of treatment (29%), behavioral changes (24%), or the treatment duration (9%) (Bell et al 2000).
Patients are more likely to request advertised drugs and doctors to prescribe them, regardless of their misgivings (Gilbody et al 2005).
Medical Errors

44,000 to 98,000 patients are killed in US hospitals each year by documented, preventable medical errors (Kohn et al 2000).
Despite the proven effectiveness of simple checklists in reducing infections in hospitals (Pronovost et al 2006), most ICU physicians do not use them.
Simple diagnostic tools, which may even ignore some of the available data, give measurably better outcomes in areas such as deciding whether to send a new admission to a coronary care bed (Green and Mehr 1997); see the sketch after this list.
Tort law often actively penalizes physicians who practice evidence-based medicine instead of the medicine that is customary in their area (Monahan 2007).
Out of 175 law schools, only one requires a basic course in statistics or research methods (Faigman 1999), so many judges, jurors, and lawyers are misled by nontransparent statistics.
93% of surgeons, obstetricians, and other health care professionals at high risk for malpractice suits report practicing defensive medicine (Studdert et al 2005).
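The Green and Mehr result refers to a fast-and-frugal decision tree that asks a few yes/no questions in a fixed order and stops at the first one that decides. The sketch below captures that structure; the cue names and their order are a simplified assumption for illustration, not the validated instrument.

```python
# A sketch of a fast-and-frugal decision tree in the style of Green and Mehr (1997).
# The cues and their order here are a simplified illustration, not the validated
# instrument; the point is that a tool this small can guide the decision.

def send_to_coronary_care_unit(st_segment_changes: bool,
                               chest_pain_is_chief_complaint: bool,
                               any_other_risk_factor_present: bool) -> bool:
    """Return True if the patient should get a coronary care bed."""
    if st_segment_changes:                  # first cue decides on its own
        return True
    if not chest_pain_is_chief_complaint:   # second cue decides on its own
        return False
    return any_other_risk_factor_present    # final cue settles the remaining cases

# Example: no ST-segment changes, chest pain is the chief complaint,
# and one additional risk factor is present -> coronary care bed.
print(send_to_coronary_care_unit(False, True, True))  # True
```

The design choice is the point: by deliberately ignoring most of the available data, such a tree stays transparent and easy to apply under time pressure, which helps explain how so simple a tool can match or beat more elaborate aids and unaided judgment.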
Regional Variations in Health Care

Tonsillectomies vary twelvefold between the counties in Vermont with the highest and lowest rates of the procedure (Wennberg and Gittelsohn 1973).
Fivefold variations in one-year survival from cancer across different regions have been observed (Quam and Smith 2005).
Fiftyfold variations in the number of people receiving drug treatment for dementia have been reported (Prescribing Observatory for Mental Health 2007).
Rates of certain surgical procedures vary tenfold to fifteenfold between regions (McPherson et al 1982).
Clinicians are more likely to consult their colleagues than medical journals or the library, partially explaining regional differences (Shaughnessy et al 1994).
Research

Researchers may report only favorable trials, only report favorable data (Angell 2004), or cherry-pick data to only report favorable variables or subgroups (Rennie 1997).
Of 50 systematic reviews and meta-analyses on asthma treatment, 40 had serious or extensive flaws, including all 6 associated with industry (Jadad et al 2000).
Lower-tech knowledge and applications tend to be considered less innovative and are often ignored (Shi and Singh 2008).
Poor Use of Statistics In Research

Only about 7% of major-journal trials report results using transparent statistics (Nuovo, Melnikow and Chang 2002).
Data are often reported in biased ways: for instance, benefits are often reported as relative risks (“reduces the risk by half”) and harms as absolute risks (“an increase of 5 in 1000”); absolute risks seem smaller even when the underlying change in risk is the same (Gigerenzer et al 2007).
Half of trials inappropriately use significance tests for baseline comparisons; two-thirds present subgroup findings, a sign of possible data fishing, often without appropriate tests for interaction (Assmann et al 2000).
One third of studies use mismatched framing, where benefits are reported one way (usually as a relative risk reduction, which makes them look bigger) and harms another (usually as an absolute risk reduction, which makes them look smaller) (Sedrakyan and Shih 2007); a worked example follows this list.
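As a worked example of the framing problem, the snippet below uses hypothetical event rates (1% in the control group, 0.5% in the treatment group) to show how the same result can be quoted as an impressive relative risk reduction or as a modest absolute one.

```python
# Hypothetical event rates chosen for illustration: the same trial result,
# framed three different ways.

control_event_rate = 0.010    # 10 in 1,000 untreated patients have the event
treatment_event_rate = 0.005  # 5 in 1,000 treated patients have the event

absolute_risk_reduction = control_event_rate - treatment_event_rate
relative_risk_reduction = absolute_risk_reduction / control_event_rate
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # "cuts the risk in half"
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")  # "0.5 percentage points"
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")   # 200 treated per event avoided
```

Mismatched framing exploits exactly this asymmetry: quoting the 50% figure for benefits and the 0.5-percentage-point figure for harms makes a treatment look both highly effective and nearly harmless.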
Positive Publication Bias

Positive publication bias overstates the effects of treatment by up to one-third (Schulz et al 1995).
More than 50% of research is unpublished or unreported (Mathieu et al 2009).
In ten high-impact medical journals, only 45.5% of trials were adequately registered before testing began; of these, 31% showed discrepancies between the outcomes measured and those published (Mathieu et al 2009).
Pharmaceutical Company Induced Bias

Studies funded by the pharmaceutical industry are more likely to report results favorable to the sponsoring company (Lexchin et al 2003).
There is a significant association between industry sponsorship and both pro-industry outcomes and poor methodology (Bekelman and Kronmal 2008).
In manufacturer-supported trials of non-steroidal anti-inflammatory drugs, half the time the data presented did not match claims made within the article (Rochon et al 1994).
68% of US health research is funded by industry (Research!America 2008), which means that research likely to generate profits for the health care industry tends to be prioritized.
71 out of 78 drugs approved by the FDA in 2002 were “me-too” drugs: more profitable because of the patent, but not substantially different from existing medications (Angell 2004).
“Seeding trials” by pharmaceutical companies promote treatments instead of testing hypotheses (Hill et al 2008).
Even accurate research may be misreported by pharmaceutical company advertising, including ads in medical journals (Villanueva et al 2003).
In 92% of cases, pharmaceutical leaflets distributed to doctors have data summaries that either cannot be verified or inaccurately summarize available data (Kaiser et al 2004).