Neurorehabilitation Scientist
Teaching the next generation of physiotherapists
Wednesday, October 3, 2012
PNF and Neurological Rehabilitation
The introductory course covers (somewhat broadly) the various approaches to neurological rehabilitation. One of those is PNF (proprioceptive neuromuscular facilitation). I am aware of my personal biases, having trained as a physiotherapist in Australia under a strong motor relearning framework; but I have a hard time understanding the rationale for PNF as a neurological rehabilitation approach. I was taught that PNF is a stretching technique, and a quick scan of recent studies on PNF suggests it is widely used for this purpose.
My colleagues who were teaching the PNF material to the students last week raved about PNF and how effective it is at treating neurological problems, including rigidity. If this anecdotal evidence, which made PNF sound like magic, is true, then why does the research evidence tell a different story? Without my own anecdotal evidence of the “magic of PNF,” I rely on the reported empirical evidence, so I am skeptical about PNF for neuro rehab. I would have thought that in the current age of evidence-based physical therapy, my clinical colleagues would be more on board with the research evidence. I can understand wanting the students to have the skill set in their toolbox, but I am concerned that the (anecdotal) evidence presented was misleading, and that the students will expect magical (and possibly unrealistic) results from PNF.
I appeared to be an island among my colleagues in my views on the topic last week. I wonder how other neuro physiotherapists (clinicians or scientists) view PNF as a technique for patients with neurological disorders…
Tuesday, September 4, 2012
Clinical significance versus statistical significance
The ability to appraise research evidence for its validity and determine its applicability to a particular patient case is a critical skill for evidence-based physical therapy practitioners. While the research evidence comprises only one component of evidence-based practice (EBP), it is an important component. (The other key components are the therapist’s clinical expertise and the patient’s individual circumstances, values and preferences.)
In evaluating evidence from research studies, my students are often confused by the difference between statistical and clinical significance, or they hang their hat on the findings being statistically significant (or not). Both are important and need to be considered when making decisions about the value of research evidence.
In research studies comparing the effects of two interventions on a particular outcome, the difference between treatment groups at the end of the intervention is usually determined by a statistical test. The specific statistical test will depend on the type of data, the study design, and the research hypothesis being tested. The statistical test indicates whether Treatment A has a different effect than Treatment B, but it does not reveal any information about the size of the treatment effect. The effect size is vital information in determining whether the resulting outcome is worth caring about from a clinical standpoint.
Lack of statistical significance does not necessarily indicate lack of clinical significance
To illustrate this, let’s discuss the difference between the p-value and the effect size.
p-values and statistical significance
The p-value is the probability of obtaining a difference at least as large as the one observed if, in truth, there were no difference between the interventions – that is, if the result were due to chance alone. Therefore, if you are trying to demonstrate a difference between two interventions you want this value to be small – you want it to be UNLIKELY that the difference between the groups arose by chance. Typically, the threshold for statistical significance is set at 0.05. This means that if the probability of obtaining the observed difference by chance alone is greater than 5%, the difference is not considered to be statistically significant.
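To make this concrete, here is a minimal sketch in Python (using scipy) of how such a p-value is obtained for a two-group comparison. The outcome scores below are invented purely for illustration:

```python
from scipy import stats

# Hypothetical end-of-intervention outcome scores (e.g., gait speed in m/s)
treatment_a = [0.82, 0.91, 0.75, 0.88, 0.95, 0.79, 0.86, 0.90]
treatment_b = [0.70, 0.78, 0.74, 0.81, 0.69, 0.77, 0.72, 0.80]

# Independent-samples t-test: p is the probability of a difference at least
# this large arising by chance if the two treatments truly had the same effect
t_stat, p_value = stats.ttest_ind(treatment_a, treatment_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> "statistically significant"
```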
Statistical significance is important; you want to know that the difference is unlikely to be a chance result. But statistical significance alone is not sufficient to determine the importance of the findings. The difference between groups might not be due to chance, yet it might be too small to matter clinically. Therefore, you also want to consider the effect size, to determine whether the results are clinically meaningful. This is what it means for something to be clinically significant – the observed difference between the groups is a difference worth caring about; it is scientifically or clinically important.
Effect sizes and clinical significance
It is possible for the results of a study to be statistically significant while the effect size is small or not clinically important. Conversely, a study may fail to find statistically significant differences between two interventions even though the effect size is large and clinically significant. (In the latter case, it is worth considering whether the study was underpowered to detect the desired effect size – see the sketch below.)
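When a “non-significant” result comes with a large effect size, a quick power calculation can tell you whether the study ever had a realistic chance of detecting that effect. A sketch using the statsmodels library; the effect size (Cohen’s d = 0.5) and sample sizes are assumptions for illustration:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a medium effect (d = 0.5)
# with 80% power at alpha = 0.05
n_required = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required n per group: {n_required:.0f}")  # roughly 64 per group

# Power actually achieved by a small trial with 15 patients per group
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=15)
print(f"Achieved power with n = 15: {achieved:.2f}")  # well below 0.8
```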
An effect size measures the magnitude of the difference between two groups. It may be presented as the absolute effect size – the difference between the mean outcome value for Treatment A and the mean outcome value for Treatment B at the conclusion of the study. (Absolute effect sizes are easy to interpret if you know what change in the unit of measurement is meaningful.) Standardized effect sizes incorporate the variation in the scores into the calculation. They are useful when you want to compare the magnitude of an intervention’s impact across different outcome measures in the study. (A common standardized effect size index is Cohen’s d, which expresses the difference between two means in units of their pooled standard deviation.)
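Here is a minimal sketch of both versions of the effect size – absolute and standardized (Cohen’s d) – computed on the same invented scores from the earlier example:

```python
import numpy as np

treatment_a = np.array([0.82, 0.91, 0.75, 0.88, 0.95, 0.79, 0.86, 0.90])
treatment_b = np.array([0.70, 0.78, 0.74, 0.81, 0.69, 0.77, 0.72, 0.80])

# Absolute effect size: difference in group means, in the outcome's own units
absolute_effect = treatment_a.mean() - treatment_b.mean()

# Cohen's d: the same difference standardized by the pooled standard deviation
n_a, n_b = len(treatment_a), len(treatment_b)
pooled_sd = np.sqrt(((n_a - 1) * treatment_a.var(ddof=1) +
                     (n_b - 1) * treatment_b.var(ddof=1)) / (n_a + n_b - 2))
cohens_d = absolute_effect / pooled_sd

print(f"Absolute effect: {absolute_effect:.3f} m/s, Cohen's d: {cohens_d:.2f}")
```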
Confidence intervals and clinical significance
Confidence intervals are also helpful in determining potential meaningfulness because they establish the precision (or lack thereof) of effect sizes. A confidence interval is a range of values within which the true value of a variable is estimated to lie, with a specified level of confidence (usually 95%, but sometimes 90% or 99%). The narrower the confidence interval, the more precise the estimate of the effect size.
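Continuing the illustration, a 95% confidence interval around the mean difference can be computed as follows (again with the invented scores; this sketch assumes equal variances between groups):

```python
import numpy as np
from scipy import stats

treatment_a = np.array([0.82, 0.91, 0.75, 0.88, 0.95, 0.79, 0.86, 0.90])
treatment_b = np.array([0.70, 0.78, 0.74, 0.81, 0.69, 0.77, 0.72, 0.80])

diff = treatment_a.mean() - treatment_b.mean()
n_a, n_b = len(treatment_a), len(treatment_b)

# Standard error of the difference between the two means
pooled_var = (((n_a - 1) * treatment_a.var(ddof=1) +
               (n_b - 1) * treatment_b.var(ddof=1)) / (n_a + n_b - 2))
se_diff = np.sqrt(pooled_var * (1 / n_a + 1 / n_b))

# 95% CI: difference +/- critical t value * standard error
t_crit = stats.t.ppf(0.975, df=n_a + n_b - 2)
ci_low, ci_high = diff - t_crit * se_diff, diff + t_crit * se_diff
print(f"Difference: {diff:.3f}, 95% CI: ({ci_low:.3f}, {ci_high:.3f})")
```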
So, statistical significance and clinical significance are both important, but neither is sufficient by itself. You need to have both! You want to know that the results are not due to chance and that they are clinically important. Therefore, as consumers of the medical literature, it is important to look at both the p-value AND the effect size! Critically evaluate both before making decisions about the evidence for use in clinical practice.
Monday, August 20, 2012
The language of rehabilitation: confusion over intensity, duration, and frequency
Motivated by inconsistencies in the literature, the ACRM Stroke Movement Interventions Subcommittee discusses and proposes operational definitions for three terms: intensity, duration, and frequency. The goal is to facilitate communication across disciplines and across the research and clinical domains.
I applaud the subcommittee’s endeavor to unite the field of stroke rehabilitation and agree that developing consistent nomenclature may facilitate interdisciplinary communication. However, there are some potential pitfalls with the proposed definitions.
Let’s start with the big one, the most confusing term: Intensity.
The Merriam-Webster online dictionary (medical) defines intensity in two ways:
1. The quality or state of being intense; especially: extreme degree of strength, force, energy, or feeling
2. The magnitude of a quantity (as force or energy) per unit (as of surface, charge, mass, or time)
The ACRM Stroke Movement Interventions Subcommittee recommends the following definition for intensity, within the scope of stroke rehabilitation:
“The amount of physical activity or mental work put forth by the client during a particular movement or series of movements, exercise, or activity during a defined period of time” (page 1396)
This definition encapsulates both aspects of the dictionary definition – the feeling of intense effort (“mental work”), and the magnitude of a quantity (“amount of physical activity”) per unit (“time”).
However, mental effort is a subjective phenomenon: it is not only difficult to measure, but nearly impossible to prescribe. As a physiotherapy researcher, I prescribe the treatment dose to be tested in an experimental intervention. I cannot imagine prescribing a standardized level of mental effort to be achieved in each session; in contrast, a specific number of repetitions within a pre-determined period of time can be operationalized and controlled.

The deeper problem with the definition proposed by the ACRM subcommittee is that mental effort may differ between two patients performing the same amount of physical activity. If you define intensity by the amount of physical activity, and two patients perform the same number of repetitions in a defined period of time, then the patients received the same intensity of treatment. However, if you define intensity by the “mental work put forth by the client,” and the two patients report different levels of mental effort in completing that same prescribed activity, then their intensities were not the same. The two halves of the subcommittee’s definition can thus lead to contradictory classifications of intensity, and a definition that can contradict itself is unlikely to help achieve the subcommittee’s goal of streamlining interdisciplinary communication.
Since mental effort is subjective, and is likely influenced by individual characteristics (such as comfort threshold or even personality traits), it would seem that intensity may be more concretely and consistently defined solely by the amount of physical activity, expressed in terms of number of repetitions (and sets) in a fixed period of time.
There is also an issue with the subcommittee’s proposed definition of “duration.” The problem is that the subcommittee again supports two different definitions of “duration” of treatment – the duration of a single session and the duration of the intervention period. The first – duration of a single session – is already embedded in intensity. Specifically, intensity, by the subcommittee’s own definition, is the amount of physical activity or mental work within a “defined period of time” – that is, the duration of the session. If one appropriately specifies the treatment intensity (i.e., includes the defined period of time for the session), then the logical definition for treatment duration is the length of time over which the intervention is provided. This information is critical for consumers of the research literature.
Frequency refers to how often the intervention is administered. There is little to debate over this term.
So while I applaud the effort of the subcommittee to “optimize terminology for stroke motor rehabilitation” and support the need for a common language, I fear the recommended definitions are too imprecise to achieve this goal, and may contribute to further confusion.