46 Assessment for Effective Intervention 43(1)
Conclusion
The effective implementation of MTSS relies on the use of evidence-based interventions and methods to monitor student progress in response to interventions. Although options related to evidence-based interventions continue to flourish, a greater depth of understanding is needed with regard to what works, for whom, and under what conditions. Central to these determinations is the use of reliable and valid data to inform decisions. This study provides additional evidence regarding the sensitivity of DBR-SIS to detect behavior change. Although questions remain regarding how best to monitor student progress in response to behavioral interventions, these findings suggest that DBR-SIS offers a promising approach to formative assessment.
Authors’ Note
Opinions expressed herein do not necessarily reflect the position of the U.S. Department of Education, and no official endorsement should be inferred.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect
to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support
for the research, authorship, and/or publication of this article:
Preparation of this article was supported by funding provided by
the Institute of Education Sciences, U.S. Department of Education
(R324A110017).