What is Evidence-Based Practice Anyway?
This is an excerpt from Evidence-Based Practice in Athletic Training by Scot Raab & Deborah Craig.
Definition of Evidence-Based Practice
Evidence-based practice (EBP) is a systematic method of reviewing the best evidence, combining it with the art of athletic training or your clinical expertise, and making informed choices to treat the athlete. Integrating EBP requires that you incorporate your clinical expertise, your athletes' values, and the best available clinical research into your practice. At its core, EBP is about focusing on athlete and clinical outcomes.
Clinical expertise is the culmination of your experience treating and providing care to athletes. It includes your personal values, preferences, experiences, and wisdom. Clinical expertise develops from hours of observation and trial and error, and it can carry a strong sense of pride and emotional investment. An AT who is invested in caring for an athlete can be uncomfortable being wrong about an assessment or a rehabilitation and treatment program. Likewise, facing a challenging clinical case, deciding on a course of action, and having that action confirmed by other health care providers can be very rewarding. This is why many ATs are wary of incorporating techniques that are unfamiliar to them. However, such a guarded approach can obscure the most important goal: optimal athlete outcomes.
Patient values are the preconceived ideas and beliefs each athlete brings to the clinical setting. Athletes all have their own distinctive concerns that you must address. Often, the athlete arrives with an established trust in, or distrust of, ATs that has been cultivated through prior experience or social discussions. Furthermore, the athlete's values can be skewed by the appearance, function, and traffic of the athletic training room. Some athletes arrive lacking a clear understanding of the body and the healing process and may have set unrealistic timelines for their return to participation. This will influence their adherence to, or desire to complete, therapy. Treatment can be further confounded by comorbidities and the athlete's support network. Although the challenge is unique for each athlete, as an EBP clinician, you must consider all athletes' perspectives.
Clinical research is a scientific and systematic process that generates evidence through hypothesis testing and sound methodology. This is perhaps the most challenging portion of the EBP concept to incorporate. You may have limited experience analyzing peer-reviewed literature, and learning this process may therefore seem daunting. Published studies should, at a bare minimum, introduce a research question, describe the methods used to assess subjects, outline the variables of interest, and explain the participant inclusion and exclusion criteria. The statistical analysis should align with the methods, and the authors should report only facts in the results section; their interpretation of the outcomes belongs in the discussion or conclusion section of the article. When assessing the quality of a clinical research article, ask the following questions:
- Were eligibility criteria specified?
- Were subjects randomly assigned to groups, and was assignment concealed, if appropriate?
- Were subjects similar at baseline?
- As appropriate, were subjects, clinicians, and assessors blinded to the treatment of participants?
- Was the completion rate of participants 85% or higher?
Eligibility criteria are a set of delimiters that detail who can be included in a study. These might include age limits, an association with a specific sport, a particular injury or medical condition, ethnicity, or sex. You need to know whether the study will be relevant to the athlete in your care.
Randomization is important in research because it helps to determine whether a treatment truly made an impact or whether the observed effect occurred by chance alone or as a result of preconceived bias. An unethical researcher might place participants in a study group according to a notion that one group will respond more favorably to the treatment, with the purpose of showing support for that treatment. Likewise, whether participants were blinded (unaware of whether they were placed in the control or experimental group) is an important consideration when reviewing a study. Participants who are not blinded may alter their effort on tests or skew their self-reports, whether to help the researcher or to support a product or treatment they favor.
The similarity of participants is also crucial when determining a study's relevance. Participants should be similar in such factors as age, sport, and condition. It would be inappropriate to conclude that a new treatment works if those in the experimental group receiving the treatment were vastly different from those in the control group who did not receive a treatment. Furthermore, blinding the assessors to the participants' treatment limits the possibility of assessor bias.
Finally, researchers should report the participant completion rate, which also reveals the percentage of participants who dropped out of the study. A completion rate of less than 85% raises suspicion, and reviewers should ask why so many participants dropped out. Was the study poorly organized? Was it painful? Was it too cumbersome or difficult?
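To make these appraisal questions concrete, here is a minimal sketch, in Python, of a checklist you might keep for a single article, including the completion-rate arithmetic and the 85% threshold mentioned above. The field names and the example figures are illustrative assumptions, not part of the original checklist.

```python
from dataclasses import dataclass


@dataclass
class AppraisalChecklist:
    """Quality checklist for a single clinical research article."""
    eligibility_criteria_specified: bool
    randomized_and_concealed: bool
    similar_at_baseline: bool
    blinded_where_appropriate: bool
    participants_enrolled: int
    participants_completed: int

    @property
    def completion_rate(self) -> float:
        """Percentage of enrolled participants who finished the study."""
        return 100.0 * self.participants_completed / self.participants_enrolled

    def concerns(self) -> list:
        """List the checklist items that raise questions about study quality."""
        issues = []
        if not self.eligibility_criteria_specified:
            issues.append("Eligibility criteria were not specified.")
        if not self.randomized_and_concealed:
            issues.append("Group assignment was not randomized or concealed.")
        if not self.similar_at_baseline:
            issues.append("Groups were not similar at baseline.")
        if not self.blinded_where_appropriate:
            issues.append("Subjects, clinicians, or assessors were not blinded.")
        if self.completion_rate < 85.0:  # threshold suggested in the text
            issues.append(f"Completion rate was only {self.completion_rate:.0f}%.")
        return issues


# Hypothetical article: 40 enrolled, 31 completed (about 78%), so it gets flagged.
article = AppraisalChecklist(
    eligibility_criteria_specified=True,
    randomized_and_concealed=True,
    similar_at_baseline=True,
    blinded_where_appropriate=False,
    participants_enrolled=40,
    participants_completed=31,
)
for concern in article.concerns():
    print("-", concern)
```

Running this example would flag the missing blinding and the 78% completion rate, mirroring the way you would weigh the same questions while reading the article itself.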
With all of these aspects of research to consider, always keep the three components of EBP (clinical expertise, patient values, and clinical research) in mind. The bottom line is that EBP should always revolve around the athlete.
Five Steps of Evidence-Based Practice
To use evidence-based practice and retain an athlete focus, follow these five steps:
- Create a clinically relevant and searchable question concerning your athlete.
- Conduct a database search to find the best evidence.
- Critically appraise the articles or evidence for quality.
- Critically synthesize the evidence to choose and implement a treatment.
- Assess the outcomes by monitoring the athlete.
Let's take a closer look at each of these steps.
- Step 1: The first step is to start with the athlete, establish the clinical concern, and build a clinical question that centers on solving the issue or treating the condition. Further details on developing a clinical research question are presented in chapter 2. In the meantime, remember that the clinical question should be focused and searchable. A question that is too narrow in scope will return limited results. One that is too broad will return excessive information, limiting your ability to incorporate the content into a treatment plan. For example, if you are curious about the ability of ultrasound to facilitate increased tissue extensibility and are treating an athlete with iliotibial band (ITB) syndrome, searching for "treat ITB with ultrasound" may return few or no results; it is too narrow and specific. If you search for "ultrasound treatment," the results will most likely be too broad. A more appropriate approach might be to search for "ultrasound AND connective tissue." This would return studies related to ultrasound and its effects on connective tissue. The results will not be specific to the ITB, but because the ITB is connective tissue, the treatment parameters and outcomes may be similar enough to draw on for clinical use.
- Step 2: This step pertains to conducting the database search and looking for the best evidence related to the athlete's condition or injury. Chapter 2 explains how to conduct the search (including Boolean modifiers, search engines, and databases), format the question, and complete the first search. With practice, you will become adept at completing this in minimal time; a brief sketch of one such search follows this list.
- Step 3: Appraise the articles for quality and for whether they are applicable to your athlete. Chapters 3 through 6 will help you learn how to do this. It is helpful to rate the quality and applicability of the studies you find on these two scales. A study may be of high quality but not applicable to your current athlete. For example, consider the effect of counterirritants. A research project that applied counterirritants to athletes with arthritis and found no improvement in range of motion at the fingers would not apply to an athlete's use of counterirritants to treat muscle spasms of the gastrocnemius. In this example, the study might be well designed but have no relevance to an athlete with a tight gastrocnemius. Conversely, a study of the effect of counterirritants on athletes with muscle spasms might be very applicable. However, if the article fails to report subjects' baseline measures, the inclusion or exclusion criteria, or the type of counterirritant used, this would indicate low quality. Without a certain amount of requisite data presented in the study, it would be impossible to devise a treatment plan.
- Step 4: This step involves critical processing and synthesizing. Integrating the evidence, your own clinical expertise and comfort in performing certain skills, and the values of the athlete will form the framework for your treatment plan.
- Step 5: This step returns to the athlete. You need to critically evaluate the athlete's progress and outcomes, reflecting on steps 1 through 4 and continually looking for ways to improve.
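As noted in step 2, the database search can be scripted once the Boolean question is set. The sketch below is one possible illustration, assuming you query PubMed through NCBI's public E-utilities esearch endpoint with the example question from step 1 ("ultrasound AND connective tissue"); any search engine or database covered in chapter 2 would serve equally well, and the function name here is hypothetical.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# NCBI E-utilities search endpoint for PubMed (public; light use needs no API key).
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def pubmed_search(query: str, max_results: int = 10) -> list:
    """Return the PubMed IDs matching a Boolean query string."""
    params = urlencode({
        "db": "pubmed",        # search the PubMed database
        "term": query,         # Boolean query, e.g. "ultrasound AND connective tissue"
        "retmode": "json",     # ask for a JSON response
        "retmax": max_results, # cap the number of IDs returned
    })
    with urlopen(f"{ESEARCH_URL}?{params}") as response:
        result = json.load(response)["esearchresult"]
    print(f"{result['count']} articles matched {query!r}")
    return result["idlist"]


# Broad enough to catch connective-tissue studies, narrow enough to stay manageable.
pmids = pubmed_search("ultrasound AND connective tissue")
print(pmids)
```

The script only shortens the retrieval step; appraising and synthesizing what the search returns (steps 3 and 4) still happens by reading the articles with your athlete in mind.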