
Examines in Physical Medicine & Rehabilitation

The Interpretation of Evidence Based Practice

Abulkhair M Beatti*

Armed Forces Centre for Health Rehabilitation, Saudi Arabia

*Corresponding author: Abulkhair M Beatti, Armed Forces Centre for Health Rehabilitation, Saudi Arabia

Submission: August 26, 2017; Published: November 13, 2017

DOI: 10.31031/EPMR.2017.01.000505

ISSN: 2637-7934
Volume 1 Issue 1

Introduction

Healthcare practitioners have ethical and professional responsibilities to provide the best possible care for every patient [1,2]. To do this, they are required to select effective and safe therapy that addresses the patient's treatment goals [3]. To select the best evidence for their practice, they need to interpret evidence-based practice (EBP) correctly. When research evidence is conclusive, it can be assimilated into patient care, provided the findings apply to that patient [3,4].

However, difficulty arises when research evidence is inconclusive, which is often the case [5]. A common error is to treat insufficient or inconclusive evidence as evidence of no effect [6]. Interpreting differences that do not reach statistical significance as a finding of no effect is erroneous, because studies may lack the sample size needed to detect an effect [7,8]. This was highlighted by Freedman et al. [9], who reviewed 33 randomized controlled trials (RCTs) from three major peer-reviewed orthopaedic journals. Of the 33 RCTs, 25 reported negative results. None of these 25 studies had a sample size sufficient to detect a small effect size (0.2 of a standard deviation), and 12 lacked the power to detect even a large effect size (0.8 of a standard deviation). Moreover, across these 25 studies the average sample size was only about 10% of the required number [9].
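To make the statistics concrete, the following is a minimal sketch (not taken from Freedman et al. [9]; the 20-per-group trial below is hypothetical) using the standard normal-approximation formula for a two-group comparison. It first computes the per-group sample size needed to detect the two effect sizes mentioned above, then the 95% confidence interval around a non-significant result from a small trial.

    import math
    from scipy.stats import norm

    def n_per_group(d, alpha=0.05, power=0.80):
        # Normal-approximation sample size for a two-group comparison:
        # n = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2, with d in SD units.
        z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
        z_beta = norm.ppf(power)            # 0.84 for 80% power
        return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

    print(n_per_group(0.2))   # small effect: 393 patients per group
    print(n_per_group(0.8))   # large effect: 25 patients per group

    # Hypothetical underpowered trial: 20 patients per group, observed
    # standardized mean difference of 0.4.
    n, d = 20, 0.4
    se = math.sqrt(2 / n)                    # approximate standard error of d
    lo, hi = d - 1.96 * se, d + 1.96 * se
    print(f"95% CI: ({lo:.2f}, {hi:.2f})")   # about (-0.22, 1.02)

Under these assumptions, detecting a small effect requires roughly 393 patients per group, far more than most of the reviewed trials recruited. The hypothetical trial's confidence interval crosses zero (non-significant) yet remains compatible with effects larger than 0.8 of a standard deviation, which is precisely why a non-significant result cannot be read as a finding of no effect [8].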

Interpreting the results of such studies as evidence of no effect is misleading and leaves the impression that the intervention is not effective [8]. On the basis of such misleading information, an effective intervention may not be used in the future [10]. It might also discourage further research on the intervention by giving the impression that the question has already been answered [10]. When evidence is inconclusive, clinicians should use the available research along with clinical experience to inform practice. According to Reinertsen [11], "Given the imperfections and uncertainties of the evidence base, it might be argued that, rather than waiting for perfect evidence, we should make good judgments using the evidence we have, implement those choices together, measure the results, and thereby use our practices to extend our knowledge base of what works and what does not". In summary, healthcare providers need to apply EBP as it was originally defined: using the best available research evidence in conjunction with their clinical skills and the values of the patient.

However, if there is no conclusive evidence from RCTs, clinicians should use the next best available level of evidence in combination with their clinical experience and the patient's needs. In addition, healthcare professionals need to be careful that techniques or interventions are not discarded from practice on the basis of a perceived lack of evidence or contradictory evidence. This would ensure that patients receive the best evidence-based health service while techniques or modalities that may still be useful are not lost from practice.

References

  1. Gaag A (2007) Health Professions Council's standards of proficiency. Health Professions Council, UK.
  2. APA (1999) Australian Physiotherapy Association Code of Conduct. Australian Physiotherapy Association, Australia.
  3. Hush JM, Alison JA (2011) Evidence-based practice: lost in translation? J Physiother 57(3): 143-144.
  4. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS, et al. (1996) Evidence based medicine: what it is and what it isn't. BMJ 312(7023): 71-72.
  5. Nutley S, Walter I, Davies H (2007) Using Evidence: How Research Can Inform Public Services. The Policy Press, Bristol, England, p. 71.
  6. Alderson P, Chalmers I (2003) Survey of claims of no effect in abstracts of Cochrane reviews. BMJ 326: 475.
  7. Oxman AD, Lavis JN, Fretheim A, Lewin S (2009) SUPPORT Tools for evidence-informed health Policymaking (STP) 17: Dealing with insufficient research evidence. Health Res Policy Syst 7(Suppl 1): S17.
  8. Altman DG, Bland JM (1995) Absence of evidence is not evidence of absence. BMJ 311(7003): 485.
  9. Freedman KB, Back S, Bernstein J (2001) Sample size and statistical power of randomised, controlled trials in orthopaedics. J Bone Joint Surg Br 83(3): 397-402.
  10. Alderson P (2004) Absence of evidence is not evidence of absence. BMJ 328(7438): 476-477.
  11. Reinertsen JL (2003) Zen and the art of physician autonomy maintenance. Ann Intern Med 138(12): 992-995.

© 2017 Abulkhair M Beatti. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.