Kristen Byra*, PhD, BCBA-D, LBA and Matthew Temple, PhD, BCBA
Upskill ABA Inc.
*Corresponding author: Kristen Byra, PhD, BCBA-D, LBA, Upskill ABA Inc.
Submission: March 18, 2024; Published: March 27, 2024
ISSN 2639-0612, Volume 7, Issue 5
According to the Council of Autism Service Providers (CASP) practice guidelines for healthcare funders and managers, one of the core characteristics of Applied Behaviour Analysis (ABA) is that consistent and ongoing review of data is necessary to guide clinical decision making. It is further advised that direct observation data be reviewed at least weekly (CASP, 2020). While frequent data analysis is critical for successful outcomes, it is equally important that, when static progress is identified, instructional programming is swiftly modified so the learner can succeed.
Progress monitoring is indispensable for assessing treatment outcomes and guiding treatment decisions [1]. Practitioners use progress monitoring to (a) determine whether individuals are benefiting from instructional programs or interventions, (b) identify individuals who are not making meaningful progress, and (c) provide direction for continued evaluation of interventions or instructional programs for individuals not making meaningful progress [2,3]. The purposes of progress monitoring are to (a) assess learning outcomes, (b) evaluate instructional tools and interventions, and (c) determine eligibility for alternative educational services or placements. Continual and frequent progress monitoring provides oversight of data and learner progress that informs crucial decisions about whether instructional practices and interventions should be maintained, modified, or terminated [2]. The two predominant methods of progress monitoring are mastery measurement, which evaluates an individual’s understanding and proficiency on specific skills, and general outcome measurement (i.e., curriculum-based measurement), which evaluates skills across domains or curricula [4].
Recently, the quality of services within ABA has come under renewed scrutiny [5,6]. This scrutiny stems from poor outcomes and may be linked to infrequent or neglected monitoring of learner outcomes and progress. Progress monitoring offers multiple benefits, including (a) more effective learning interventions and processes, (b) better-informed decisions by practitioners, (c) accountability, and (d) improved learner self-awareness [7-9]. However, the adverse outcomes of poor-quality or infrequent progress monitoring can include ineffective or inefficient treatment services, inaccurate data, and a lack of significant or meaningful learner progress [10]. We further posit that infrequent or poor-quality progress monitoring can result in the following:
A. Incorrect assumptions about treatment effectiveness.
B. A misunderstanding of the underlying causes or functions of behaviour.
C. Inaccurate predictions of future behaviour, resulting in misinformed decisions regarding learner treatment and interventions.
D. Learner demotivation due to marginal progress.
E. Clinicians experiencing compassion fatigue and burnout due to a perceived lack of meaningful outcomes.
F. Learners being subjected to poor instruction, resulting in no skill acquisition and, at worst, the establishment of side biases or faulty stimulus control.
While the threats to patient and practitioner well-being from poor progress monitoring are myriad, practical interventions and applications can be implemented to help mitigate or eliminate their influence. Borntrager and Lyons [11] posited four recommendations for ensuring effective program monitoring in schools, which generalize to ABA settings. The first recommendation is to (a) select meaningful learner targets: performance monitoring should start with quantifiable targets that are socially significant for the learner and the learner’s family. Second, program monitoring should include a (b) review of interventions, instructional outcomes, and treatment planning; ongoing monitoring to gauge the efficacy of practitioner-introduced interventions and instruction should occur at least monthly and ideally weekly or biweekly [8]. Third, practitioners should (c) provide ongoing feedback to the learner; we also add the importance of providing ongoing feedback to the learner’s family and Registered Behavior Technicians (RBTs). Fourth, providing and reviewing (d) visual and graphical feedback can help identify patterns and trends that inform critical decisions.
The success of a treatment plan is predicated on the assumption that appropriate goals have been selected for the learner. Appropriate goal selection means that goals are (a) linked to the assessment results, (b) developmentally appropriate, (c) functional, and (d) positively impact the learner’s (and family’s) quality of life. When goals are inappropriately selected, learning will be challenging and impractical, especially if the learner does not possess the prerequisite skills to be successful. Therefore, careful consideration must be used in the goal selection process to attenuate maladaptive behaviour, to help ensure motivation is not decreased by traditional teaching methodologies, and to mitigate poor treatment fidelity.
Because of the potential adverse outcomes associated with maladaptive behaviour, decreased motivation, and poor treatment fidelity, clinicians must commit to systematic, objective, and consistent analysis of progress data. As we discuss the benefits and applications of practitioner progress monitoring, the term data analysis must be contextualized because of its multiple colloquial interpretations; here, data analysis is discussed within the visual analysis framework. The literature has demonstrated that decisions about progress based on visual analysis can be made after ten sessions [12,13]. While progress monitoring may seem premature at only ten sessions, timely analysis is required for effective and efficient learning. Additionally, given that the acquisition rate varies among learners [13], slower learning rates do not necessarily mean treatment was ineffective; instead, such learners might have benefited from earlier programming adjustments [14].
Decision points include whether (a) the goal is projected to be mastered within the next month, (b) adequate progress is occurring, (c) inadequate progress is evident, (d) progress is varied, or (e) there is no progress [12,13]. Given the perceived trajectory, decisions can then be made regarding goal continuation, goal modification, or goal cessation. A decision tree for addressing static progress is outlined by Ferraioli and colleagues [15] and serves as a practical resource for how clinicians can troubleshoot problematic programming.
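As a purely illustrative aid, the sketch below shows how these decision points might be approximated programmatically from recent session data. The scoring scale, mastery criterion, and thresholds are hypothetical assumptions for demonstration; they are not prescribed by the cited literature, and any such rule is a supplement to, not a substitute for, the clinician’s visual analysis.

# Minimal, hypothetical sketch (Python): sorting a goal's recent
# trajectory into the decision points above. The 0-1 scoring scale,
# mastery criterion, and cutoffs are illustrative assumptions only.

from statistics import mean, stdev

def classify_progress(scores, mastery_criterion=0.9, sessions_per_month=12):
    """Classify progress from the last ten session scores (0 = none, 1 = mastered)."""
    recent = scores[-10:]
    if len(recent) < 10:
        return "insufficient data"

    # Least-squares slope as a rough stand-in for the visual trend line.
    n = len(recent)
    x_bar, y_bar = mean(range(n)), mean(recent)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(recent)) \
            / sum((x - x_bar) ** 2 for x in range(n))

    projected = recent[-1] + slope * sessions_per_month

    if projected >= mastery_criterion:
        return "mastery projected within the next month -- continue"
    if stdev(recent) > 0.2:          # assumed cutoff for 'varied' progress
        return "varied progress -- review fidelity and motivating variables"
    if slope >= 0.02:                # assumed minimum acceptable trend
        return "adequate progress -- continue and keep monitoring"
    if slope > 0:
        return "inadequate progress -- modify programming"
    return "no progress -- modify or discontinue the goal"

In practice, a script like this would run after each data sync and simply queue goals for the analyst’s visual review rather than make treatment decisions autonomously.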
Questions for self-reflection include: (a) Is the learner making meaningful progress? (b) How do I, as a practitioner, know? (c) Is the progress sufficient to justify the continuation of current programming and interventions? The third question may be the most difficult to answer because the field of applied behaviour analysis has yet to establish unified standards of care regarding outcomes. The absence of unified standards of care is further complicated by varying expectations and interpretations of the number of goals mastered (i.e., acquisition rate) and of scores on curriculum-based or norm-referenced assessments. Fortunately, some guidelines help inform practitioners of more universally accepted acquisition rates and indicate that, when slower rates of acquisition occur, modifications are needed [14]. Using practical, evidence-based guides that instruct and inform practitioners about when intervention or program alterations are needed will only strengthen the perceived necessity of frequent program monitoring.
Despite the necessity of early, frequent, and consistent monitoring, evaluation, and potential programming modification, it is also suggested that clinicians take responsibility for the short- and long-term effects of the learner’s trajectory. According to Ala’i-Rosales et al. [16], clinicians intervene during a learner’s most critical developmental period, and their actions will have long-lasting effects on both the learner and the family. Because of this crucial relationship between clinician and learner, clinicians should evaluate and compare the learner’s yearly progress against outcomes reported in the published research and practice routine monitoring. It is also advisable for clinicians to periodically invite impartial review of the learner’s progress.
In addition to Borntrager and Lyons’ [11] recommendations for program monitoring, we offer four common-sense strategies to help ensure the continuation of timely progress monitoring:
A. Practitioners should develop an orchestrated system to frequently analyze learner progress and data, including objectives, skills, and goals. We suggest a weekly or biweekly review of all target data trends and graphs [8]. Weekly or biweekly progress monitoring allows the behaviour analyst to identify changes in the learner’s behaviour and determine if the intervention is effective (a minimal sketch of how such a review might be partially automated follows this list).
B. Practitioners should set clear expectations of outcomes and regularly communicate the learner’s progress with all stakeholders, including the learner (if applicable), technicians, family members, co-treating professionals, and insurance companies.
C. Practitioners should set reasonable timelines for achieving specific objectives and learner outcomes, including biweekly monitoring.
D. Practitioners should leverage weekly or biweekly progress monitoring to adjust interventions should the data indicate undesired effects.
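As a companion to strategy A, the following hypothetical sketch illustrates how a weekly or biweekly review might be partially automated by flagging targets whose recent data trend is flat or declining. The target names, window size, slope cutoff, and 0-1 scoring scale are assumptions for illustration only; flagged targets would still require visual analysis by the behaviour analyst.

# Hypothetical weekly-review helper (Python): flag acquisition targets
# whose recent trend is flat or declining so they are reviewed first.
# Target names, window size, and the slope cutoff are illustrative.

from statistics import mean

def trend(scores):
    """Least-squares slope of per-session scores (0-1 scale)."""
    n = len(scores)
    x_bar, y_bar = mean(range(n)), mean(scores)
    return sum((x - x_bar) * (y - y_bar) for x, y in enumerate(scores)) \
           / sum((x - x_bar) ** 2 for x in range(n))

def weekly_review(targets, min_slope=0.01, window=10):
    """Return (target, slope) pairs needing attention, worst first."""
    flagged = []
    for name, scores in targets.items():
        recent = scores[-window:]
        if len(recent) < window:
            continue                      # too little data to judge yet
        slope = trend(recent)
        if slope < min_slope:             # flat or declining trend
            flagged.append((name, round(slope, 3)))
    return sorted(flagged, key=lambda pair: pair[1])

# Example caseload with ten sessions of data per target.
caseload = {
    "manding for items": [0.2, 0.3, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.7, 0.8],
    "tacting colours":   [0.40, 0.40, 0.35, 0.40, 0.38, 0.40, 0.39, 0.40, 0.40, 0.38],
}
print(weekly_review(caseload))            # e.g. [('tacting colours', 0.0)]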
Progress monitoring should be at the forefront of a BCBA’s responsibilities as a practitioner. A learner not making progress should be alarming and should evoke immediate corrective behaviours to resolve the disconnect and create a path for course correction. Behaviour analysts should act quickly to thwart the faulty premise that the mere passage of time will improve learner performance [17,18]. As practitioners, we must adopt the mindset that rapid skill acquisition is the anticipated result for a learner, rather than a best-case scenario achieved only by some. Practitioners must remain accountable for their role in a learner’s development during active treatment and for how interventions will impact the future of the learner and the learner’s family.
© 2024 Kristen Byra. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.