
The evidence in evidence-based practice: What counts and what doesn't count?

      Introduction

      Evidence-based practice (EBP), also known as evidence-based medicine, is now recognized as being of fundamental importance in the delivery of high-quality and effective health care in the modern world. It is defined as clinical decision-making based on (1) sound external research evidence combined with individual clinical expertise [1] and (2) the needs of the individual patient. More and more, EBP is seen as a consensus among all stakeholders, and clinical guidelines are seen as facilitative rather than prescriptive in informing practice.
      That EBP makes sense is almost intuitive. No longer is a history of usage a guarantee of therapeutic value [2]. Today, evidence provided by well-conducted research studies is required to demonstrate “what works best for whom.” It is also required to determine the costs involved and the benefit-risk ratios [3]. Many health care professions in addition to medicine have joined the EBP movement and are currently carrying out trials regarding the efficacy and effectiveness of their treatment interventions [4].
      For some in the chiropractic profession, this is unarguably the way forward [5]; for others, it is not. An article in the Christmas 1999 issue of the British Medical Journal can be seen as a timely warning to complementary medicine practitioners of what might happen if they do not adopt the EBP approach [6]. For those who are prepared to buy in to EBP, there is a question that is only now being debated in the chiropractic literature [7]. This question, which is being vigorously debated elsewhere in EBP [8-10], is one of exactly what does and what does not count as evidence in EBP. In the working definition of EBP, the “evidence” is characterized as being “sound” and generated from “well-conducted research.” But what exactly does this mean? For many, the terms sound and well-conducted research are instinctively interpreted as referring to randomized controlled trials (RCTs). The RCT has been designated, in many cases accurately, the gold standard of research designs. Accordingly, the intuitive assumption that only evidence from RCTs counts in EBP is understandable. However, this position is now being challenged, and other designs, such as observational and qualitative research, are being considered legitimate providers of the evidence in EBP. It might be time to look at how these moves will affect chiropractic research in the future.
      Exactly what is it about the RCT that makes it supreme in a hierarchy of research designs? The RCT is rightly accorded this position among quantitative research designs because, with respect to internal validity, it is unsurpassed by any other design. The virtues of the RCT are specified in its name: it is controlled, and it is randomized. If a difference in outcome between the test and control groups is detected, it is the randomized allocation and the use of a control (or matched comparison) group that allow cause and effect to be inferred (though not proved). Beyond supporting causal inference, randomization reduces systematic bias in the data (bias leading to either an overestimate or an underestimate of the treatment effect) that can arise from biased allocation to groups or from differences in known and unknown prognostic factors at baseline. Intention-to-treat analysis, the hallmark of a good RCT, further reduces the possibility of a biased outcome, as does double blinding. All of these features ensure that the study's findings are valid for the patients included in the investigation.
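      To make the protective role of randomization concrete, the following sketch (not drawn from any study cited here; all quantities are invented) simulates a baseline prognostic factor that influences outcome. When allocation is random, the simple difference in group means recovers the true treatment effect; when allocation is correlated with prognosis, the same comparison overstates it.

```python
# Illustrative sketch only (not from the article): a toy simulation of why
# randomized allocation protects against bias from a baseline prognostic factor.
# All quantities below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                              # patients per arm
true_effect = 5.0                       # true extra improvement due to treatment

# Baseline prognostic score (higher = better expected recovery regardless of care).
prognosis = rng.normal(0.0, 10.0, size=2 * n)

def improvement(treated, prognosis):
    noise = rng.normal(0.0, 5.0, size=prognosis.size)
    return true_effect * treated + prognosis + noise

# 1) Randomized allocation: prognosis is balanced across groups on average.
treated = rng.permutation(np.repeat([0, 1], n))
y = improvement(treated, prognosis)
print("randomized estimate:", round(y[treated == 1].mean() - y[treated == 0].mean(), 2))

# 2) Biased allocation: patients with better prognosis are more likely to be treated.
p_treat = 1.0 / (1.0 + np.exp(-prognosis / 5.0))
treated_biased = rng.binomial(1, p_treat)
y_biased = improvement(treated_biased, prognosis)
print("biased estimate:   ",
      round(y_biased[treated_biased == 1].mean() - y_biased[treated_biased == 0].mean(), 2))
# The randomized estimate sits near 5; the biased estimate overstates the effect.
```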
      But what about those patients not included in the investigation? In gaining the ability to determine cause and effect and to reduce bias in the data, the RCT has in some cases paid a price: it has become more and more divorced from the real world. In other words, although an RCT may be internally valid, this does not necessarily mean that it is also externally valid. Where the RCT has been most successful is in those cases in which both internal and external validity are high, as in the evaluation of homogeneous systems such as pharmaceutical treatment interventions. Here the treatment is homogeneous and does not depend to any significant extent on the clinical encounter or, for that matter, on the doctor administering the drug. The condition being treated is also homogeneous, in the sense that there is often an identifiable pathologic lesion. The placebo effect can be controlled for by the use of a sham treatment. Double blinding is possible. Outcome measures are commonly objective, and clinically important effects are known. In the case of pharmaceutical interventions, there is little argument that what counts in EBP is evidence from well-conducted RCTs.
      But what about the evaluation of treatment interventions that, in contrast, are messy and complex? What if the treatment is more than the intervention per se, with the clinical encounter and setting, the placebo, and the characteristics of the individual practitioner all being significant parts of the “package”? What if the condition being treated is also complex, lacks a discernible pathology, and is better explained by a biopsychosocial model than by a biomedical one? What if double blinding, or for that matter single blinding, is not possible? What if the outcomes are subjective and clinically important differences are still a matter for debate? When evaluations of treatment interventions in this arena are carried out, is the RCT, grounded as it is in scientific purity, still the best design?
      The first chiropractic research agenda, set in the 1970s, focused on the RCT as the means of evaluating the efficacy of chiropractic interventions [7]. Since then, there has been considerable debate about so-called fastidious trials and pragmatic trials. Pragmatic trials, which concede the heterogeneous nature of the chiropractic intervention, might offer the most realistic way forward for RCTs evaluating chiropractic treatment in the future. The trial comparing chiropractic with hospital outpatient management in patients with low back pain was one of the first pragmatic RCTs to be carried out [11]. The ramifications of this trial were considerable for the chiropractic profession in the United Kingdom, and RCTs remain of enormous political importance to chiropractic. Because findings from RCTs are very convincing in a system that, for the most part, still accepts the RCT as the best way of providing evidence for all treatment interventions, the RCT remains an essential design in chiropractic research.

      Qualitative research

      Given the limitations of the RCT in evaluating chiropractic interventions [7], however, it does not make sense to pursue the RCT exclusively in the future [12]. Other research designs are coming of age, and the limitations of the RCT, now recognized by other health professions (including medicine), have propelled these designs into the limelight. Qualitative research, not so long ago considered by many to be anecdotal rather than a rigorous form of scientific inquiry, is now increasingly recognized as a very meaningful way of providing the evidence in evidence-based medicine [13]. The British Medical Journal has recently published a number of studies in which qualitative research designs were used [14, 15].
      Qualitative research, for the most part, works within the interpretivist paradigm, in which variables are not isolated as they are in the reductionist approach [16]. This design purposively observes complexity and interaction in context, as opposed to isolated parts. Its research methods adopt a phenomenologic approach that examines the lived experiences, behaviors, and actions of patients in their everyday context. For example, RCTs might provide evidence of the efficacy of rehabilitation interventions under “ideal conditions”; however, unless qualitative approaches are also used to understand patients' perceptions of the exercises and their motivation to comply, the evidence from RCTs alone might not be enough to inform clinical practice.
      Another example of how the human dimension is being brought into medical research through qualitative methods is the re-emergence of the case history [17]. Again, once largely considered anecdotal and unscientific, the case report is making a comeback [18]. The case report deals with the individual patient, the clinical encounter, and the experience of the individual clinician, and it is useful for identifying research questions in everyday clinical practice. Recently, a new breed of case report has begun to appear in the British Medical Journal [19, 20]. These are “evidence-based” case reports that show how individual cases are managed through the use of published research findings [21]. Practitioners, who are ideally placed to take advantage of this new approach to the case report, submit well-documented accounts of how the research evidence has been applied in practice to inform the management of an individual patient. This is, after all, the ultimate goal of EBP.

      Outcomes research

      Moving the research agenda to where it matters most has been the driving force behind the upsurge of yet another research design, one that for many years was neglected in the face of the unquestioning demand for the RCT. This is outcomes research, which systematically collects quantitative data using observational methods in the practice-based setting [22-29]. Outcomes research is so named because of its central focus on measuring outcomes in everyday settings that are relevant and meaningful to patients' lives [30, 31]. In other words, outcomes research has deliberately moved away from the scientific purity of RCTs to the practical, messy, and complex conditions of everyday clinical practice [32]. Outcomes research is a bottom-up approach with relevance to what actually happens in practice, in contrast to the well-intentioned but top-down research conducted under rigorously controlled conditions, which practitioners might be reluctant to accept. In the outcomes approach, the practical realities are not seen as confounding variables that must be stripped away. Instead, the approach is flexible and accommodating, yet at the same time systematic and subject to the same laws of statistics and epidemiology as the RCT [32].
      A table contrasting the strengths and limitations of the RCT and the outcomes study is given by Rosenfeld [32]. This distinction between research in the research-based setting and research in the practice-based setting is illustrated by the use of the terms efficacy and effectiveness with regard to treatment interventions. The research question remains the same for both: What works? The efficacy of an intervention is what works under “ideal and controlled” conditions, whereas the effectiveness of an intervention is what works under “everyday and real-life” conditions [33]. It is clear that for the question of efficacy, the RCT is unarguably the best design; however, given that randomization and controlled conditions play no part in everyday clinical practice, evidence for effectiveness cannot be accrued through use of the same design.
      In outcomes research (or practice-based research, as it is sometimes known), not only is the research setting pragmatic; so also is the data collection. Hence, data are systematically collected during treatment in a way that is least disruptive to the routine clinical setting [34]. Perhaps the biggest concession of outcomes research is in the focus on patient-centered outcomes, such as symptoms, function, quality of life, and satisfaction [35]. These are outcomes that matter to patients, in contrast to physiologic outcomes that are not directly relevant to the patients in their everyday lives. This surge in outcomes research (the so-called outcomes movement) might be very opportune for the chiropractic profession. Chiropractors know that what they do in practice “works.” Outcomes research is now urgently needed to provide evidence of this.
      As with any research, outcomes research will be useful to the profession only if it is done well. Poor research can do as much to retard the profession as good research can do to promote it. Outcomes research has limitations, and any conclusions must be supported by the data collected.
      Outcomes research is almost certainly more at risk of producing meaningless data than is the RCT. This can be avoided, however, through the use of properly constructed study protocols, systematic methods of data collection, consecutive (not convenience) patient samples, robust outcome measures, comprehensive follow-ups, complete data sets, and conclusions that are supported by the data as well as mindful of the limitations of the design.
      Undoubtedly, the major limitation of the outcomes study lies in the design's internal validity. This is compromised because there is no randomized allocation to groups and, in some cases, because there is no control or comparison group. The presence or absence of a control group has a profound effect on the interpretation of the data. An uncontrolled study is purely descriptive, and as such cannot distinguish treatment effects from the natural history of the condition. Therefore, whereas the RCT can determine cause and effect, an uncontrolled outcomes study cannot. Another limitation in the internal validity of an outcomes study lies in the biases inherent in the way in which data are collected. Thus, selection and allocation bias, detection bias, and transfer bias, all of which are reduced or eliminated in the RCT, limit the conclusions that can be drawn.
      The focus of outcomes research on outcomes that are important to patients (the so-called subjective measures) has been debated over recent years [36, 37]. Although physiologic “objective” recordings, such as measurements of range of motion and proprioceptive function, remain important outcomes in basic and experimental research, they will inevitably become less important in clinical research that asks “What works? For whom?” and “When?” Like “objective” measures, patient-centered outcomes are measurable in the sense that they can be quantified and subjected to statistical analysis. As a result, the measures used in outcomes studies must meet the same stringent psychometric criteria as objective measures and be valid, reliable, and responsive to change [38]. Outcome measures selected for use in routine practice must be not only patient-centered and psychometrically robust but also practical for use in the busy clinical setting [39]; this means that they should be short enough for patients to complete in a few minutes. For patients with musculoskeletal conditions, such an outcome measure has recently been developed. Having been validated in patients with back pain [40] and in patients with neck pain (unpublished observations), it is currently being introduced into field practices in a computerized format so that data are automatically downloaded into large databases. In addition to providing data for outcomes research, the software allows practitioners to access and monitor individual patients' progress over time.
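      As an illustration of the responsiveness criterion mentioned above, the sketch below computes two indices that are commonly used to gauge an outcome measure's sensitivity to clinical change: the effect size and the standardized response mean. It is a minimal example using invented pre- and post-treatment scores, not data from any instrument discussed in this article.

```python
# Minimal sketch, for illustration only: two indices commonly used to gauge how
# responsive an outcome measure is to clinical change, computed from paired
# pre- and post-treatment scores. The scores below are synthetic.
import numpy as np

pre  = np.array([52, 61, 47, 70, 58, 66, 49, 73, 55, 64], dtype=float)
post = np.array([30, 45, 38, 52, 33, 50, 41, 60, 36, 48], dtype=float)
change = pre - post                                   # improvement (higher score = worse)

effect_size = change.mean() / pre.std(ddof=1)         # mean change / SD of baseline scores
srm = change.mean() / change.std(ddof=1)              # standardized response mean

print(f"effect size: {effect_size:.2f}")
print(f"SRM:         {srm:.2f}")
# Larger values indicate an instrument that registers clinical change more readily.
```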
      So how can the data from outcomes research be used to provide the evidence in EBP? There are 2 approaches. The first is the systematic documentation of patient outcomes in single cohorts of patients treated in practice. Because such studies lack control or comparison groups, they cannot provide evidence of treatment effectiveness. However, they can be used to answer a number of important research questions, including the feasibility of collecting outcomes in the busy routine clinical setting and the determination of clinically important differences in patient-centered outcomes. Findings from such studies will thus help inform larger effectiveness studies and RCTs. Through multiple regression analysis, single-cohort studies can also be used to identify variables that predict outcome, in both the short term and the long term. Because identifying predictor variables holds enormous potential for preventing chronicity in musculoskeletal conditions, cost-efficient designs such as the single-cohort outcomes study, which can monitor outcomes over long periods, will become increasingly important.
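      The following sketch illustrates the kind of regression analysis described above, fitted to a synthetic single cohort. The predictor names (duration of symptoms, baseline pain, psychological distress) are hypothetical examples chosen for illustration, not variables from any study cited here.

```python
# Illustrative sketch only: multiple regression on a synthetic single cohort to ask
# which baseline variables predict outcome. The predictor names are hypothetical
# examples, not variables taken from any study cited in this article.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
duration = rng.exponential(6.0, n)          # weeks of symptoms at presentation
baseline_pain = rng.uniform(2, 9, n)        # 0-10 numerical rating scale
distress = rng.normal(0, 1, n)              # standardized psychological distress score

# Synthetic follow-up pain: worse with higher baseline pain, longer duration, distress.
followup_pain = (0.5 * baseline_pain + 0.15 * duration
                 + 0.8 * distress + rng.normal(0, 1.0, n))

X = sm.add_constant(np.column_stack([duration, baseline_pain, distress]))
fit = sm.OLS(followup_pain, X).fit()
for name, coef, p in zip(["intercept", "duration", "baseline_pain", "distress"],
                         fit.params, fit.pvalues):
    print(f"{name:14s} coef={coef:6.2f}  p={p:.3g}")
# Coefficients with small p-values flag candidate predictors of a poor outcome,
# which can then be examined prospectively in larger studies.
```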
      The second approach in outcomes research is the study that investigates the effectiveness of a treatment intervention. Effectiveness can be investigated only when control or comparison groups are used, and in the routine practice-based setting it is unlikely that control groups will be viable. Therefore, comparative studies, either within or between specialties, can be implemented to answer the question of what works best for particular conditions in the real-life setting. Provided that multivariate and comorbidity analyses are used to reduce the effects of confounding variables and allocation bias (the beneficial effect caused by allocating subjects with less severe conditions or better prognostic factors to one group), valid and useful data can be obtained to inform the practitioner and patient of the best available treatment.
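      To show why such adjustment matters, the sketch below compares two treatment groups in a synthetic data set in which allocation is biased toward milder patients. The unadjusted comparison overstates the benefit of the favored treatment, whereas a regression that includes baseline severity recovers an estimate close to the true effect. The variable names and effect sizes are invented for illustration only.

```python
# Illustrative sketch only: a two-group comparison in routine practice with
# allocation bias, analyzed naively and with regression adjustment for baseline
# severity. All variable names and effect sizes are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1_000
severity = rng.normal(0, 1, n)                       # baseline prognostic factor
# Allocation bias: milder patients are more likely to receive treatment A (group = 1).
group = rng.binomial(1, 1.0 / (1.0 + np.exp(severity)))
# True benefit of treatment A is 2 points; more severe patients improve less.
improvement = 2.0 * group - 1.5 * severity + rng.normal(0, 1, n)

naive = improvement[group == 1].mean() - improvement[group == 0].mean()
adjusted = sm.OLS(improvement, sm.add_constant(np.column_stack([group, severity]))).fit()

print(f"naive difference:    {naive:.2f}")               # inflated by allocation bias
print(f"adjusted difference: {adjusted.params[1]:.2f}")  # close to the true 2.0
```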

      Conclusion

      EBP, in which decisions about health care are based, at least in part, on the findings of clinical research, is now well established. If chiropractic wishes to be part of this movement, it will need to identify clinically relevant questions, establish the best methods of answering those questions, and determine how the answers can best be implemented in practice to improve care for patients. How clinicians change their behavior and actually implement research findings in practice is now the subject of hot debate in the medical literature [41-43]. The reason many practitioners hesitate to commit themselves to EBP might be that much of the evidence to date has been generated from “ivory-tower” research that bears little resemblance to what happens in everyday practice. This article argues the case for the use of research methods that generate evidence from the routine practice-based setting, in particular outcomes research. To evaluate chiropractic treatment exclusively by means of the RCT would not only be restrictive; paradoxically, it might also generate invalid evidence. A change is thus required in the way evidence is defined in EBP. What counts and what does not count as evidence in the evaluation of the chiropractic intervention is now a matter for debate. I believe that the evidence net should be cast wide to embrace a number of research methods. The “best” answer, after all, is one in which different methods and lines of evidence point in the same direction. The most important issue is not what methods are used but whether they are used with foresight and in a manner appropriate to the job in hand.

      References

      1. Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence-based medicine: what it is and what it isn't. Br Med J 1996;312:71-72.
      2. Anderson R, Meeker WC, Mootz RD. A meta-analysis of clinical trials of spinal manipulation. J Manipulative Physiol Ther 1992;15:430-438.
      3. Sheldon TA, Guyatt GH, Haines A. When to act on the evidence. Br Med J 1998;317:139-142.
      4. Bithell C. Evidence-based physiotherapy. Physiotherapy 2000;86:58-60.
      5. Delaney PM, Fernandez CE. Toward an evidence-based model for chiropractic education and practice. J Manipulative Physiol Ther 1999;22:114-118.
      6. Haynes RB. A warning to complementary medicine practitioners: get empirical or else [commentary]. Br Med J 1999;319:1632.
      7. Hawk C. Chiropractic clinical research: where are we looking for the key? J Neuromusculoskeletal System 1999;7:150-155.
      8. Miles A, Polychronis A, Grey J. Evidence-based medicine: why all the fuss? This is why. J Eval Clin Pract 1997;3:83-86.
      9. Fahey T. Applying the results of clinical trials to patients in general practice: perceived problems, strengths, assumptions, and challenges for the future. Br J Gen Pract 1998;1:1173-1178.
      10. McKee M, Britton A, Black N, McPherson K, Sanderson C, Bain C. Interpreting the evidence: choosing between randomized and non-randomized studies. BMJ 1999;319:312-315.
      11. Meade TW, Dyer S, Browne W, Townsend J, Frank AO. Low back pain of mechanical origin: randomized comparison of chiropractic and hospital outpatient treatment. BMJ 1990;300:1431-1437.
      12. Bolton JE. Whence the evidence in evidence-based practice? Br J Chiropr 2000;4:2-3.
      13. Green J, Britten N. Qualitative research and evidence-based medicine. BMJ 1998;316:1230-1232.
      14. Donovan JL, Blake DR. Qualitative study of interpretation of reassurance among patients attending rheumatology clinics: “Just a touch of arthritis, Doctor?” BMJ 2000;320:541-544.
      15. Barry CA, Bradley CP, Britten N, Stevenson FA, Barber N. Patients' unvoiced agendas in general practice consultations: qualitative study. BMJ 2000;320:1246-1250.
      16. Handbook of qualitative research. Thousand Oaks (CA): Sage; 1994. p. 536.
      17. Charlton BG, Walston F. Individual case studies in clinical research. J Eval Clin Pract 1998;4:147-155.
      18. Maisonneuve H, Ojasoo T. From the life cycles of clinical evidence to the learning curve of clinical experience. J Eval Clin Pract 1999;5:417-421.
      19. Samanta A, Beardsley J. Sciatica: which intervention? BMJ 1999;319:302-303.
      20. Samanta A, Beardsley J. Low back pain: which is the best way forward? BMJ 1999;318:1122-1123.
      21. Godlee F. Applying research evidence to individual patients. Evidence based case reports will help. BMJ 1998;316:1621-1622.
      22. Epstein AM. The outcomes movement—will it get us where we want to go? N Engl J Med 1990;323:266-269.
      23. Delamothe T. Using outcomes research in clinical practice. BMJ 1994;308:1583-1584.
      24. Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ 1996;312:1215-1218.
      25. Davies HTO, Crombie IK. Interpreting health outcomes. J Eval Clin Pract 1997;3:187-199.
      26. Hoiriis KT, Owens EF, Pfleger B. Changes in general health status during upper cervical chiropractic care: a practice-based research project. Chiropr Res J 1997;4:18-25.
      27. Wilson NHF, Mjor I. Practice-based research: importance, challenges and prospects. A personal view. Primary Dental Care 1997 Jan.
      28. Hawk C, Long CR, Boulanger K. Development of a practice-based research program. J Manipulative Physiol Ther 1998;21:149-156.
      29. Kernick D, Stead J, Dixon M. Moving the research agenda to where it matters. BMJ 1999;319:206-207.
      30. Long AF, Dixon P. Monitoring outcomes in routine practice: defining appropriate measurement criteria. J Eval Clin Pract 1996;2:71-78.
      31. Turk DC. Editorial: Here we go again: outcomes, outcomes, outcomes. Clin J Pain 1999;15:241-243.
      32. Rosenfeld RM. Meaningful outcomes research. In: Managed care, outcomes and quality. A practical guide. New York: Thieme; 1998. p. 99-115.
      33. Pittler MH, White AR. Efficacy and effectiveness. Focus on Alternative and Complementary Therapies 1999;4.
      34. Isenberg SF, Rosenfeld RM. Problems and pitfalls in community-based outcomes research. Otolaryngol Head Neck Surg 1997;116.
      35. Deyo RA, Battie M, Beurskens AJHM, Bombardier C, Croft P, Koes B, et al. Outcome measures for low back pain research. Spine 1998;23:2003-2013.
      36. Jette AM. Outcomes research: shifting the dominant research paradigm in physical therapy. Phys Ther 1995;75:965-970.
      37. Bolton JE. Future directions for outcomes research in back pain. Eur J Chiropr 1997;45:57-64.
      38. Bolton JE. On the responsiveness of evaluative measures. Eur J Chiropr 1997;45:5-8.
      39. Greenhalgh J, Long AF, Brettle AJ, Grant MJ. Reviewing and selecting outcome measures for use in routine clinical practice. J Eval Clin Pract 1998;4:339-350.
      40. Bolton JE, Breen AC. The Bournemouth Questionnaire: a short form comprehensive outcome measure, I. Psychometric properties in back pain patients. J Manipulative Physiol Ther 1999;22:503-510.
      41. Djulbegovic B, Morris L, Lyman GH. Evidentiary challenges to evidence-based medicine. J Eval Clin Pract 2000;6:99-109.
      42. Miles A, Charlton B, Bentley P, Polychronis A, Grey J, Price N. New perspectives in the evidence-based healthcare debate. J Eval Clin Pract 2000;6:77-84.
      43. Oswald N, Bateman H. Treating individuals according to evidence: why do primary care practitioners do what they do? J Eval Clin Pract 2000;6:139-148.