DBR Connect

Scale: Academically Engaged

Descriptive Information
Usage
Acquisition and Cost
Program Specifications and Requirements
Training

DBR Connect is an online system that uses Direct Behavior Rating – Single Item Scales (DBR-SIS) for screening and progress monitoring purposes.

DBR-SIS are tools that involve brief rating of target behavior(s) following a specified observation period (for example, a class activity such as science lab). DBR-SIS combines features of systematic direct observation (direct observation and recording within a pre-specified period) and rating scales (brief rating of defined behaviors). 

DBR Connect is intended for use for students in kindergarten to twelfth grade.

DBR Connect is intended for use with students in general education, students with disabilities, and English language learners.

DBR Connect measures school-based behavior competencies including:

Academically engaged behavior (AE) - actively or passively participating in the classroom activity.

Disruptive behavior (DB) - student action that interrupts regular school or classroom activity.

DBR-Connect provides information on student behavior in English.

DBR Connect can be purchased through the website: myDBRconnect.com.

Customers submit the online price estimate form and receive an estimate via e-mail. A Customer Support representative then contacts the customer to set up the online account.  

Yearly subscription (based on August 1 – July 31 school year) pricing is tied to the student population of the school(s). Customers can also purchase half-year subscriptions. 

DBR Connect ratings can be completed by a general education teacher, special education teacher, parent, child, external observer, or anyone with consistent access to the student throughout the observation period.

Recommended administration settings include a general education classroom, special education classroom, recess, lunchroom, home, or any setting in which the student is present.

DBR Connect is designed for use with large groups, small groups, or individuals, in any context in which a student is present.

Administration time is less than one minute per student per rating occasion; the observation period itself can be as short (e.g., 15 minutes) or as long (e.g., half a day) as needed. Additional scoring time is estimated at less than one minute per student per rating occasion.

Students can be rated concurrently by approximately five administrators.

Thirty minutes to one hour of training is required for the rater/observer.

There are no minimum qualifications for the rater; however, it is strongly recommended that all raters complete basic training (an overview and demonstration, with opportunity to practice) prior to rating to enhance rating outcomes.

Training manuals and materials are available and field-tested.

Ongoing technical support is available.

 

Sensitive to Student Change: Convincing Evidence

 

Description of evidence that the monitoring system produces data that are sensitive to detect incremental change (e.g., small behavior change in a short period of time).

Evidence that DBR-SIS can produce data that are sensitive to detect incremental change (e.g., small behavior change in a short period of time) is provided in the three studies below. Actual data are available to demonstrate how DBR-SIS has been used to monitor student performance on a frequent basis to inform decisions about student performance. The studies below represent a continuum from classwide (middle school, elementary) to individual (elementary) student focus. Graphs are provided in two of the three manuscripts (Journal of Behavioral Education; Assessment for Effective Intervention) to illustrate how the data are sensitive enough to assess change; the third manuscript (Exceptional Children) presents aggregated information in table format only, given the volume of data.

 

Chafouleas, S. M., Sanetti, L.M.H., Kilgus, S. P., & Maggin, D. M. (2012). Evaluating sensitivity to behavioral change across consultation cases using Direct Behavior Rating Single-Item Scales (DBR-SIS). Exceptional Children, 78, 491-505.

Abstract.  In this study, the sensitivity of Direct Behavior Rating Single Item Scales (DBR-SIS) for assessing behavior change in response to an intervention was evaluated.  Data from 20 completed behavioral consultation cases involving a diverse sample of elementary participants and contexts utilizing a common intervention in an A-B design were included in analyses.  Secondary purposes of the study were to investigate the utility of five metrics proposed for understanding behavioral response as well as the correspondence among these metrics and teachers’ ratings of intervention acceptability. Overall, results suggest that DBR-SIS demonstrated sensitivity to behavior change regardless of the metric used. Furthermore, there was limited association between student change and teachers’ ratings of acceptability.

 

Chafouleas, S. M., Sanetti, L.M.H., Jaffery, R., & Fallon, L. (2012). Research to practice: An evaluation of a class-wide intervention package involving self-management and a group contingency on behavior of middle school students. Journal of Behavioral Education, 21, 34-57. doi:10.1007/s10864-011-9135-8.

Abstract. The effectiveness of an intervention package involving self-management and a group contingency at increasing appropriate classroom behaviors was evaluated in a sample of middle school students. Participants included all students in each of the 3 eighth-grade general education classrooms and their teachers. The intervention package included strategies recommended as part of best practice in classroom management to involve both building skill (self-management) and reinforcing appropriate behavior (group contingency). Data sources involved assessment of targeted behaviors using Direct Behavior Rating—single item scales completed by students and systematic direct observations completed by external observers. Outcomes suggested that, on average, student behavior moderately improved during intervention as compared to baseline when examining observational data for off-task behavior. Results for Direct Behavior Rating data were not as pronounced across all targets and classrooms in suggesting improvement for students. Limitations and future directions, along with implications for school-based practitioners working in middle school general education settings, are discussed.

 

Riley-Tillman, T.C., Methe, S.A., & Weegar, K. (2009). Examining the use of Direct Behavior Rating methodology on classwide formative assessment: A case study. Assessment for Effective Intervention, 34, 242-250. doi:10.1177/1534508409333879

Abstract. High-quality formative assessment data are critical to the successful application of any problem-solving model (e.g., response to intervention). Formative data available for a wide variety of outcomes (academic, behavior) and targets (individual, class, school) facilitate effective decisions about needed intervention supports and responsiveness to those supports. The purpose of the current case study is to provide preliminary examination of direct behavior rating methods in class-wide assessment of engagement. A class-wide intervention is applied in a single-case design (B-A-B-A), and both systematic direct observation and direct behavior rating are used to evaluate effects. Results indicate that class-wide direct behavior rating data are consistent with systematic direct observation across phases, suggesting that in this case study, direct behavior rating data are sensitive to classroom-level intervention effects. Implications for future research are discussed.

 

Levels of Performance Specified: Partially Convincing Evidence

 

Are levels of performance specified in your manual or published materials?

Yes

Specify the levels of performance:

Although we have newer results related to levels of performance that have not yet been made publicly available, levels of performance can currently be derived from the following manuscript:

Chafouleas, S. M., Kilgus, S. P., Jaffery, R., Riley-Tillman, T. C., & Welsh, M. (in press). Direct Behavior Rating as a school-based behavior screener for elementary and middle grades.  Tentatively accepted in the Journal of School Psychology.

As noted in the manuscript, levels of performance were obtained through ROC analyses. These analyses yield conditional probability indices that can be used to determine an optimal cut score for identifying risk. This cut score serves as the level of performance against which an individual student's score can be compared. The above-noted manuscript established cut scores with relatively small confidence intervals. Findings indicated that the established cuts were much more accurate in identifying at-risk students than would be expected by chance alone. However, note that the sample was not selected to be representative of a national norm. The following cut scores were established for various grade groups:

Grade                     Cut Score
Early Elementary (K-2)    8
Upper Elementary (3-5)    8
Middle School (6-8)       9

This information is presented as preliminary, and with a few important caveats. For example, our subsequent analyses utilizing data collected from a different sample seem to indicate that it may not be appropriate to set uniform cuts across a grade group. In particular, cuts are not consistent across grade levels for upper elementary students, and different cuts may be needed for different portions of the school year (Fall, Winter, Spring).
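The ROC-based cut-score procedure described above can be sketched in code. This is an illustrative reconstruction, not the authors' actual analysis: the scores, the risk labels, and the use of Youden's J to select the cut are all assumptions for demonstration. Lower Academically Engaged ratings indicate greater risk, so a score at or below the cut counts as a positive screen.

```python
# Illustrative sketch (not the study's actual procedure): choosing a screening
# cut score from ROC-style analysis by maximizing Youden's J
# (J = sensitivity + specificity - 1). DBR-SIS ratings range from 0 to 10.

def youden_cut(scores, at_risk):
    """Return (cut, J) maximizing sensitivity + specificity - 1,
    treating scores <= cut as a positive (at-risk) screen."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        tp = sum(1 for s, r in zip(scores, at_risk) if r and s <= cut)
        fn = sum(1 for s, r in zip(scores, at_risk) if r and s > cut)
        tn = sum(1 for s, r in zip(scores, at_risk) if not r and s > cut)
        fp = sum(1 for s, r in zip(scores, at_risk) if not r and s <= cut)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical academic-engagement ratings (0-10) paired with
# criterion risk status from an independent screener.
scores  = [2, 3, 4, 5, 6, 7, 8, 8, 9, 9, 10, 10]
at_risk = [True, True, True, True, False, True,
           False, False, False, False, False, False]
cut, j = youden_cut(scores, at_risk)
```

In practice the published cuts were derived from conditional probability indices on much larger samples; this sketch only shows the shape of the trade-off between catching at-risk students (sensitivity) and avoiding false positives (specificity).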

Describe how the levels of performance are used for progress monitoring:

These cut scores can be used to guide decisions regarding the degree of risk associated with each sub-domain (disruptive behavior, academically engaged). That is, the closer a student's score is to the cut score, the more likely it is that the student is meeting behavioral expectations. Thus, these scores can be used as “rough estimates” for goal setting during progress monitoring.

What is the basis for specifying levels of performance?

Criterion-referenced

If norm-referenced, describe the normative profile:

Not applicable

If criterion-referenced, describe procedures for specifying levels of performance:

See: Chafouleas, S. M., Kilgus, S. P., Jaffery, R., Riley-Tillman, T. C., & Welsh, M. (in press). Direct Behavior Rating as a school-based behavior screener for elementary and middle grades.  Tentatively accepted in the Journal of School Psychology.

Describe any other procedures for specifying levels of performance:

As with all forms of behavioral progress monitoring, intra-individual comparisons for specifying levels of performance are critical, and DBR-SIS lends itself to facilitating specification of intra-individual levels of performance and goal setting.

 

Data to Support Intervention Change: Data Unavailable

 

Are validated decision rules for when changes to the intervention need to be made specified in your manual or published materials?

No

Specify the decision rules here: 

Not applicable

What is the evidentiary basis for these decision rules?

Not applicable

 

Data to Support Intervention Choice: Data Unavailable

 

Are validated decision rules for what intervention(s) to select specified in your manual or published materials?

No

Specify the decision rules here:

Not applicable

What is the evidentiary basis for these decision rules?

Not applicable

 

Reliability: Convincing Evidence

 

Chafouleas, S.M., Briesch, A.M., Riley-Tillman, T.C., Christ, T.J., Black, A.C., & Kilgus, S.P. (2010). An investigation of the generalizability and dependability of Direct Behavior Rating Single Item Scales (DBR-SIS) to measure academic engagement and disruptive behavior of middle school students. Journal of School Psychology, 48, 219-246. doi:10.1016/j.jsp.2010.02.001

Subscale(s): Academically Engaged

Form: N/A

Age Range: Middle school (8th grade)

Sample Information: Seven 8th-grade students attending an inclusive language arts classroom. Students’ demographics included: 3 boys/4 girls, 6 Hispanic/1 African-American, 4 receiving special education services. Raters included the classroom teacher, a special education teacher who provided services in the classroom, and two research assistants. In the study, raters observed students three times a day over six consecutive days for periods of 45-60 minutes. The reliability coefficients below are presented separately for the classroom teacher and the research assistants, across varying numbers of total observations.

Type of Reliability                         Rater               Coefficient   # of Observations
                                                                              1      5      10     15     20
Generalizability (relative interpretation)  Classroom Teacher   E(ρ̂)²        0.23   0.60   0.75   0.82   0.86
Dependability (absolute interpretation)     Classroom Teacher   Φ             0.17   0.42   0.52   0.57   0.59
Generalizability (relative interpretation)  Research Assistant  E(ρ̂)²        0.37   0.74   0.85   0.90   0.92
Dependability (absolute interpretation)     Research Assistant  Φ             0.26   0.60   0.72   0.77   0.80

*Note.  Teachers in this study were not exposed to the complete recommended training components.  Brief introduction/overview only was provided, with no additional feedback.
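The pattern in the table above, where both coefficients rise as observations accumulate, follows from the standard generalizability-theory projection: person variance divided by person variance plus error variance averaged over the number of occasions. The sketch below uses hypothetical variance components (not values from the study) purely to show that projection; absolute-decision error includes facet main effects, which is why the dependability coefficient Φ is always at or below the generalizability coefficient.

```python
# Sketch of how G-theory coefficients scale with the number of observations.
# The variance components below are hypothetical, not estimated from the study.

def g_coefficient(var_person, var_rel_error, n_obs):
    """Generalizability coefficient E(rho-hat)^2 for a relative decision,
    with relative error variance averaged over n_obs occasions."""
    return var_person / (var_person + var_rel_error / n_obs)

def phi_coefficient(var_person, var_abs_error, n_obs):
    """Dependability coefficient (Phi) for an absolute decision; absolute
    error adds facet main effects, so var_abs_error >= var_rel_error."""
    return var_person / (var_person + var_abs_error / n_obs)

var_p, var_rel, var_abs = 1.0, 3.0, 4.5   # hypothetical variance components
for n in (1, 5, 10, 15, 20):
    g = g_coefficient(var_p, var_rel, n)
    phi = phi_coefficient(var_p, var_abs, n)
```

Under these assumed components the single-observation coefficient is low but climbs steeply over the first handful of occasions, mirroring the shape of the observed table.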

 

Briesch, A.M., Chafouleas, S.M., & Riley-Tillman, T.C. (2010). Generalizability and dependability of behavior assessment methods to estimate academic engagement: A comparison of systematic direct observation and Direct Behavior Rating. School Psychology Review, 39, 408-421. 

Subscale(s): Academically Engaged

Form: N/A

Age Range: Kindergarten

Sample Information: Twelve kindergarten students attending an inclusive classroom. Students’ demographics included: 5 boys/7 girls, 1 Asian/1 African-American. Raters included the classroom teacher and a special education teacher who provided services in the classroom. In the study, raters observed students three times a day over ten consecutive days for periods of approximately 10 minutes. The reliability coefficients below are for a single rater across different combinations of total ratings.

Type of Reliability                         Rater              Coefficient   Rating occasions per day × number of days
                                                                             1×1     1×5     1×10     3×1     3×5     3×10
Generalizability (relative interpretation)  Classroom Teacher  E(ρ̂)²        0.54    0.66    0.68     0.62    0.86    0.69
Dependability (absolute interpretation)     Classroom Teacher  Φ             0.47    0.58    0.61     0.55    0.60    0.62

*Note.  Teachers in this study were not exposed to the complete recommended training components. Overview only was provided, with brief opportunity for questions. 

 

Chafouleas, S. M., Kilgus, S. P., Jaffery, R., Riley-Tillman, T. C., & Welsh, M. (2012). Direct Behavior Rating as a school-based behavior screener for elementary and middle grades.  Tentatively accepted in the Journal of School Psychology.

Subscale(s): Academically Engaged

Form: N/A

Age Range: Elementary and middle school samples

Score reliability through ICC (elementary sample)
  Coefficient: 0.91; SEM: 0.42
  n (examinees): 617 elementary students; n (raters): 44 classroom teachers
  Sample Information/Demographics: Grades K-5; 51.7% female; White, Non-Hispanic (N = 553; 89.6%), White, Hispanic (N = 12; 1.9%), Black (N = 9; 1.5%), American Indian or Alaska Native (N = 2; 0.3%), Asian (N = 13; 2.1%), Other (N = 8; 1.3%), missing (N = 20; 3.2%)

Score reliability through ICC (middle school sample)
  Coefficient: 0.82; SEM: 0.40
  n (examinees): 214 middle school students; n (raters): 17 classroom teachers
  Sample Information/Demographics: Grades 6-8; 46.3% female; 89.7% White, non-Hispanic

*Note.  Teachers in this study completed all recommended components of training prior to data collection.
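A one-way random-effects ICC of the kind reported above can be computed from a simple ANOVA decomposition of repeated ratings. The sketch below is illustrative only: the ratings are invented, and the study's actual ICC model (e.g., which facets were crossed) may differ.

```python
# Illustrative one-way random-effects ICC(1,1). Each row holds k = 3
# repeated DBR-SIS ratings (0-10 scale) of one student; data are made-up.

def icc_1_1(ratings):
    n = len(ratings)            # targets (students)
    k = len(ratings[0])         # ratings per target
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    # Between-target and within-target sums of squares
    ss_between = k * sum((m - grand) ** 2 for m in row_means)
    ss_within = sum((x - m) ** 2
                    for r, m in zip(ratings, row_means) for x in r)
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

ratings = [
    [8, 9, 8],
    [3, 2, 4],
    [6, 6, 5],
    [9, 10, 9],
    [1, 2, 1],
]
icc = icc_1_1(ratings)
```

Because the invented students differ far more from one another than any student's repeated ratings differ among themselves, the resulting ICC is high, which is the same qualitative situation the 0.91 and 0.82 coefficients above describe.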

 

Validity: Convincing Evidence

 

Concurrent validity serves as the primary source of validity data for DBR-SIS. As described, the intended purpose of DBR-SIS is formative. As such, a primary source of validity data comes from concurrent comparisons with a variety of behavior assessment methods. While no single behavior assessment method combines both teacher ratings and formative assessment, comparisons to Systematic Direct Observation (formative behavior assessment) and to the Behavioral and Emotional Screening System and Student Risk Screening Scale (teacher ratings) provide information about the validity of DBR-SIS.
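The concurrent validity coefficients that follow are correlations between DBR-SIS scores and criterion measures. Negative values are expected when the criterion scores risk (higher = worse) while the Academically Engaged scale scores competence (higher = better). A minimal Pearson-correlation sketch with invented data:

```python
# Sketch: concurrent validity as a Pearson correlation between hypothetical
# DBR-SIS academic-engagement ratings and a risk-oriented screener total.
# Note the expected NEGATIVE sign: higher engagement, lower rated risk.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

dbr_ae = [9, 8, 7, 3, 2, 6, 10, 4]        # hypothetical engagement ratings (0-10)
risk   = [10, 14, 18, 30, 34, 20, 8, 27]  # hypothetical risk-screener totals
r = pearson_r(dbr_ae, risk)
```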

Chafouleas, S. M., Kilgus, S. P., Jaffery, R., Riley-Tillman, T. C., & Welsh, M. (2012). Direct Behavior Rating as a school-based behavior screener for elementary and middle grades.  Tentatively accepted in the Journal of School Psychology.

Subscale: Academically Engaged

Form: N/A

Age Range: Elementary and middle school samples

Concurrent validity (elementary school sample)
  Behavioral and Emotional Screening System (BESS; Kamphaus & Reynolds, 2007): -0.70
  Student Risk Screening Scale (SRSS; Drummond, 1994): -0.64
  n (examinees): 617 elementary students; n (raters): 44 classroom teachers
  Sample Information/Demographics: Grades K-5; 51.7% female; White, Non-Hispanic (N = 553; 89.6%), White, Hispanic (N = 12; 1.9%), Black (N = 9; 1.5%), American Indian or Alaska Native (N = 2; 0.3%), Asian (N = 13; 2.1%), Other (N = 8; 1.3%), missing (N = 20; 3.2%)

Concurrent validity (middle school sample)
  Behavioral and Emotional Screening System (BESS; Kamphaus & Reynolds, 2007): -0.55
  Student Risk Screening Scale (SRSS; Drummond, 1994): -0.49
  n (examinees): 214 middle school students; n (raters): 17 classroom teachers
  Sample Information/Demographics: Grades 6-8; 46.3% female; 89.7% White, non-Hispanic

 

Riley-Tillman, T.C., Chafouleas, S.M., Sassu, K.A., Chanese, J.A.M., & Glazer, A.D. (2008). Examining the agreement of Direct Behavior Ratings and Systematic Direct Observation for on-task and disruptive behavior. Journal of Positive Behavior Interventions, 10, 136-143. doi:10.1177/1098300707312542

Subscale: On-Task (similar to Academically Engaged)

Form: N/A

Age Range: Elementary and middle school

Concurrent validity
  Criterion: Systematic direct observation (momentary time sampling by researchers)
  Coefficient: mean correlation for on-task = 0.81 (range 0.53-0.87)
  n (examinees): 15 students; n (raters): 10 elementary teachers, 5 middle school teachers
  Sample Information/Demographics: Not reported; 2 schools in the northeastern US

 

Chafouleas, S.M., Kilgus, S.P., & Hernandez, P. (2009). Using Direct Behavior Rating (DBR) to screen for school social risk: A preliminary comparison of methods in a kindergarten sample. Assessment for Effective Intervention, 34, 224-230. doi:10.1177/1534508409333547

Subscale: Academically Engaged

Form: N/A

Age Range: Kindergarten

Concurrent validity (compared with standard score obtained on criterion)
  Criterion: Social Skills Rating System – Teacher Form (SSRS; Gresham & Elliott, 1990)
  Fall period: Academic Competence = 0.53; Social Skills = 0.86; Problem Behavior = -0.88
  Spring period: Academic Competence = 0.36; Social Skills = 0.64; Problem Behavior = 0.65
  n (examinees): 20 students (Fall period), 18 students (Spring period); n (raters): 2 classroom teachers
  Sample Information/Demographics: Full-day inclusive kindergarten; ages ranged from 4-7 years; 55% girls; 90% White

 

Kilgus, S. P., Chafouleas, S. M., Riley-Tillman, T. C., & Welsh, M. E. (2012). Direct Behavior Rating scales as screeners: A preliminary investigation of diagnostic accuracy in elementary school. School Psychology Quarterly, 27, 41-50. doi: 10.1037/a0027150.

Subscale: Academically Engaged

Form: N/A

Age Range: Second grade

Concurrent validity
  Behavioral and Emotional Screening System (BESS; Kamphaus & Reynolds, 2007): -0.77
  Social Skills Improvement System – Performance Screening Guide (Elliott & Gresham, 2007):
    Motivation to Learn Scale: 0.77
    Prosocial Behavior Scale: 0.67
  n (examinees): 118 second-grade students; n (raters): 12 classroom teachers
  Sample Information/Demographics: Second-grade classrooms in public schools; 52% girls; 67% White

 

Disaggregated Reliability and Validity Data: Data Unavailable

 

 

Assessment Format: Direct Observation, Rating Scale

Rater / Scorer: Teacher, Parent, Child, External Observer

Usability Study Conducted: Yes