Psychology and Psychotherapy: Research Study

Practical Guidelines to Progress Monitoring

Kristen Byra* PhD, BCBA-D, LBA and Matthew Temple, PhD, BCBA

Upskill ABA Inc.

*Corresponding author: Kristen Byra, PhD, BCBA-D, LBA, Upskill ABA Inc.

Submission: March 18, 2024; Published: March 27, 2024

DOI: 10.31031/PPRS.2024.07.000675

ISSN 2639-0612
Volume 7 Issue 5

Introduction

According to the Council of Autism Service Providers (CASP) practice guidelines for healthcare funders and managers, one core characteristic of Applied Behaviour Analysis (ABA) is that consistent, ongoing review of data is necessary to guide clinical decision making, and it is further advised that direct observation data be reviewed at least weekly [18]. While frequent data analysis is critical for successful outcomes, it is equally important that, when static progress is identified, instructional programming is swiftly modified to support learner success.

Progress monitoring is indispensable for assessing treatment outcomes and guiding treatment decisions [1]. Progress monitoring is a method used by practitioners to (a) determine if individuals are benefiting from instructional programs or interventions, (b) identify individuals who are not making meaningful progress, and (c) provide direction for continued evaluation of interventions or instructional programs for individuals not making meaningful progress [2,3]. The purposes of progress monitoring are to (a) assess learning outcomes, (b) evaluate instructional tools and interventions, and (c) determine eligibility for alternative educational services or placements. Continual and frequent progress monitoring ensures oversight of data and learner progress, informing crucial decisions about whether instructional practices and interventions should be maintained, modified, or terminated [2]. The two predominant methods of progress monitoring are mastery measurement, which evaluates an individual’s understanding and proficiency across specific skills, and general outcome measurement (i.e., curriculum-based measurement), which evaluates skills across domains or curricula [4].

Recently, the quality of services within Applied Behaviour Analysis (ABA) has come under renewed scrutiny [5,6]. This scrutiny stems from poor outcomes and may be linked to infrequent or neglected monitoring of learner outcomes and progress. Progress monitoring offers multiple benefits, including (a) more effective learning interventions and processes, (b) better-informed decisions by practitioners, (c) accountability, and (d) improved learner self-awareness [7-9]. However, the adverse outcomes of poor-quality or infrequent progress monitoring can include ineffective or inefficient treatment services, inaccurate data, and a lack of significant or meaningful learner progress [10]. We further posit that infrequent or poor-quality progress monitoring can result in the following:
A. Incorrect assumptions about treatment effectiveness.
B. A misunderstanding of the underlying causes or behavioural function.
C. Inaccurate predictions of future behaviour, resulting in misinformed decisions regarding learner treatment and interventions.
D. Learners becoming demotivated due to marginal progress.
E. Clinicians experiencing compassion fatigue and burnout due to a lack of perceived meaningful outcomes.
F. Learners being subjected to poor instruction, resulting in no skill acquisition and, at worst, establishing side biases or faulty stimulus control.

While poor progress monitoring poses myriad threats to patient and practitioner well-being, practical interventions and applications can be implemented to mitigate or eliminate its influence. Borntrager and Lyon [11] posited four recommendations for ensuring effective program monitoring in schools, which generalize to ABA settings. The first recommendation involves (a) selecting meaningful learner targets: performance monitoring should start with quantifiable targets that are socially significant for the learner and the learner’s family. Second, program monitoring should include (b) a review of interventions, instructional outcomes, and treatment planning; ongoing monitoring to gauge the efficacy of practitioner-introduced interventions and instruction should occur at least monthly, and ideally weekly or biweekly [8]. Third, practitioners should (c) provide ongoing feedback to the learner; we also add the importance of providing ongoing feedback to the learner’s family and Registered Behavior Technicians (RBTs). Fourth, (d) providing and reviewing visual and graphical feedback can help identify patterns and trends that inform critical decisions.

The success of a treatment plan is predicated on the assumption that appropriate goals have been selected for the learner. The appropriateness of goal selection is defined as (a) goals that are linked to the assessment results, (b) developmentally appropriate, (c) functional, and (d) that positively impact the learner’s (and family’s) quality of life. When goals are inappropriately selected, learning will be challenging and impractical, especially if the learner does not possess the prerequisite skills to be successful. Therefore, careful consideration must be used in the goal selection process to attenuate maladaptive behaviour, to help ensure motivation is not decreased by traditional teaching methodologies, and to mitigate poor treatment fidelity.

Because of the potential adverse outcomes associated with maladaptive behaviour, decreased motivation, and poor treatment fidelity, clinicians must commit to systematic, objective, and consistent progress data analysis. As we further discuss the benefits and applications of practitioner progress monitoring, the term data analysis must be contextualized due to its multiple colloquial interpretations. For this research, data analysis will be discussed within the context of the visual analysis framework. The literature has demonstrated that decisions about progress based on visual analysis can be adjudicated after ten sessions [12,13]. While progress monitoring may seem premature at only ten sessions, timely analysis is required for effective and efficient learning. Additionally, given that the acquisition rate varies among learners [13], a slower learning rate does not necessarily mean treatment was ineffective; instead, the learner might have benefited from earlier programming adjustments [14].

Decision points include whether (a) the goal is projected to be mastered within the next month, (b) adequate progress is occurring, (c) inadequate progress is evident, (d) progress is varied, or (e) no progress is occurring [12,13]. Furthermore, given the perceived trajectory, decisions can be made regarding goal continuation, goal modification, or goal cessation. A decision tree for addressing static progress is outlined by Ferraioli et al. [15] and serves as a practical resource for how clinicians can troubleshoot problematic programming.

Questions for self-reflection include (a) is the learner making meaningful progress? (b) how do I, as a practitioner, know? and (c) is the progress enough to justify the continuation of current programming and interventions? The third question may be the most difficult to answer because the field of applied behaviour analysis has yet to establish unified standards of care regarding outcomes. The absence of unified standards is further complicated when one considers varying expectations and interpretations related to the number of goals mastered (i.e., acquisition rate) and scored curriculum-based or norm-referenced assessments. Fortunately, some guidelines help inform practitioners of more universally accepted acquisition rates and indicate that, when slower rates of acquisition occur, modifications are needed [14]. Using practical, evidence-based guides that instruct practitioners about when intervention or program alterations are needed will only strengthen the perceived necessity of frequent progress monitoring.

Despite the necessity of early, frequent, and consistent monitoring, evaluation, and potential programming modification, clinicians should also take responsibility for the short- and long-term effects of the learner’s trajectory. According to Ala’i-Rosales and Zeug [16], clinicians intervene during a learner’s most critical developmental period, and their actions will have long-lasting repercussions for both the learner and the family. Because of this crucial relationship between clinician and learner, clinicians should evaluate and compare the learner’s yearly progress against outcomes reported in published research and practice routine monitoring. It is also advisable for clinicians to periodically invite impartial review of the learner’s progress.

In addition to Borntrager and Lyon’s [11] recommendations for program monitoring, we offer four common-sense strategies to help ensure timely progress monitoring:
A. Practitioners should develop an orchestrated system to frequently analyze learner progress and data, including objectives, skills, and goals. We suggest a weekly or biweekly review of all target data trends and graphs [8]. Weekly or biweekly progress monitoring allows the behavior analyst to identify changes in the learner’s behavior and determine if the intervention is effective.
B. Practitioners should set clear expectations of outcomes and regularly communicate the learner’s progress with all stakeholders, including the learner (if applicable), technicians, family members, co-treating professionals, and insurance companies.
C. Practitioners should set reasonable timelines for achieving specific objectives and learner outcomes, including biweekly monitoring.
D. Practitioners should leverage weekly or biweekly progress monitoring to adjust interventions should the data indicate undesired effects.

Progress monitoring should be at the forefront of a BCBA’s responsibilities as a practitioner. A learner not making progress should be alarming and should evoke corrective behaviours to resolve the disconnect and create a path for course correction. Behaviour analysts should act quickly to thwart the faulty premise that the mere passage of time will improve learner performance [17,18]. As practitioners, we must adopt a mindset in which rapid skill acquisition is the anticipated result for a learner, rather than a best-case scenario achieved by only some. Practitioners must hold themselves accountable for their role in a learner’s development during active treatment and for how interventions will impact the future of the learner and the learner’s family.

References

  1. Shepley C, Lane JD, Graley D (2022) Progress monitoring data for learners with disabilities: Professional perceptions and visual analysis of effects. Remedial and Special Education 1: 1-11.
  2. Brown R (2021) What is progress monitoring? Illuminate Education.
  3. Dexter DD, Hughes C (n.d.) Progress monitoring within a response-to-intervention model. RTI Action Network.
  4. IRIS Center (n.d.) Progress monitoring: Mastery measurement vs. general outcome measurement. Vanderbilt Peabody College, USA.
  5. Silbaugh BC, El Fattal R (2022) Exploring quality in the applied behaviour analysis service delivery industry. Behaviour Analysis in Practice 15(2): 571-590.
  6. Sohn E (2020) Low standards erode quality of popular autism therapy.
  7. Fuchs D, Fuchs LS (2005) Responsiveness-to-intervention: A blueprint for practitioners, policymakers, and parents. Teaching Exceptional Children 38(1): 57-61.
  8. Fuchs D, Fuchs LS (2006) Introduction to responsiveness-to-intervention: What, why, and how valid is it? Reading Research Quarterly 41(1): 93-99.
  9. Safer N, Fleischman S (2005) Research matters: How student progress monitoring improves instruction. Educational Leadership (ascd) 62(5).
  10. Peschon D (2020) Feedback and the effects on progress monitoring. Northwestern College: Master’s Theses & Capstone, USA.
  11. Borntrager C, Lyon AR (2015) Client progress monitoring and feedback in school-based mental health. Cognitive and Behavioral Practice 22(1): 74-86.
  12. Kipfmiller KJ, Brodhead MT, Wolfe K, LaLonde K, Sipila ES, et al. (2019) Training front-line employees to conduct visual analysis using a clinical decision-making model. Journal of Behavioral Education 28: 301-322.
  13. Wolfe K, McCammon MN, LeJeune LM, Holt AK (2023) Training preservice practitioners to make data-based instructional decisions. Journal of Behavioural Education 32(1): 1-20.
  14. Weiss MJ (1999) Differential rates of skill acquisition and outcomes of early intensive behavioral intervention for autism. Behavioral Interventions: Theory & Practice in Residential & Community‐Based Clinical Programs 14(1): 3-22.
  15. Ferraioli S, Hughes C, Smith T (2005) A model for problem solving in discrete trial training for children with autism. Journal of Early and Intensive Behavior Intervention 2(4): 224.
  16. Ala’i-Rosales S, Zeug N (2010) Three important things to consider when starting intervention for a child diagnosed with autism. Behavior Analysis in Practice 3(2): 54.
  17. Weiss MJ, Delmolino L (2006) The relationship between early learning rates and treatment outcome for children with autism receiving intensive home-based applied behavior analysis. The Behavior Analyst Today 7(1): 96-110.
  18. The Council of Autism Service Providers (CASP) (2020) Applied behaviour analysis treatment of autism spectrum disorder: Practice guidelines for healthcare funders and managers.

© 2024 Kristen Byra. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.