DIBELS Next

DAZE

Cost | Technology, Human Resources, and Accommodations for Special Needs | Service and Support | Purpose and Other Implementation Information | Usage and Reporting

Amplify: The basic pricing plan is an annual per-student license of $14.90. For users already using an mCLASS assessment product, adding mCLASS:DIBELS Next costs $6 per student.

Sopris: There are three purchasing options for implementing Progress Monitoring materials in Year 1:

1) Progress Monitoring via Online Test Administration and Scoring

2) Progress Monitoring materials as part of the purchase of Classroom Sets, which also include Benchmark materials and DIBELS Next Survey

3) Individual Progress Monitoring materials

DIBELS Next Classroom Sets contain everything needed for one person to conduct the Benchmark Assessment for 25 students and the Progress Monitoring Assessment for up to five students. These easy-to-implement kits simplify the distribution and organization of DIBELS Next materials.

DMG: Materials may be downloaded at no cost from DMG at http://dibels.org/next; the only costs are the minimal reproduction costs associated with printing.

Testers require 4-8 hours of training. Examiners must be, at minimum, paraprofessionals.

Training manuals and materials are field tested and are included in the cost of the tool.

Amplify’s Customer Care Center offers complete user-level support from 7:00 a.m. to 7:00 p.m. EST, Monday through Friday. Customers may contact a customer support representative by telephone, by e-mail, or electronically through the mCLASS website. Additionally, customers have self-service access to instructions, documents, and frequently asked questions on the website. The research staff and product teams are available to answer questions about the content of the assessments.

Accommodations:

DIBELS Next is an assessment instrument well suited to capturing the developing reading skills of students in special education who are learning to read, with a few exceptions: a) students who are deaf; b) students who have fluency-based speech disabilities, e.g., stuttering or oral apraxia; c) students who are learning to read in a language other than English or Spanish; d) students with severe disabilities. DIBELS Next is appropriate for all other students, including those in special education for whom reading connected text is an IEP goal. For students receiving special education, it may be necessary to adjust goals and timelines. Approved accommodations are available in the administration manual.

Where to obtain:

Amplify Education, Inc.
55 Washington Street, Suite 900
Brooklyn, NY 11201
1-800-823-1969, option 1

www.amplify.com

Sopris Learning
17855 Dallas Parkway, Suite 400, Dallas, TX 75287-6816
http://www.soprislearning.com

DMG

859 Willamette Street, Suite 320, Eugene, OR 97401
541-431-6931
(888) 399-1995

http://dibels.org

DIBELS Next measures are brief, powerful indicators of foundational early literacy skills that: are quick to administer and score; serve as universal screening (or benchmark assessment) and progress monitoring; identify students in need of intervention support; evaluate the effectiveness of interventions; and support the RtI/Multi-tiered model. DIBELS Next comprises six measures: First Sound Fluency (FSF), Letter Naming Fluency (LNF), Phoneme Segmentation Fluency (PSF), Nonsense Word Fluency (NWF), DIBELS Oral Reading Fluency (DORF), and Daze. 

Daze is the standardized DIBELS version of maze procedures for measuring reading comprehension and is intended for use in grades three to six. The purpose of a maze procedure is to measure the reasoning processes that constitute comprehension. Specifically, Daze assesses the student’s ability to construct meaning from text using word recognition skills, background information and prior knowledge, familiarity with linguistic properties such as syntax and morphology, and reasoning skills.
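In the published Daze forms, approximately every seventh word after an intact first sentence is replaced with a box of three choices: the correct word and two distractors. A minimal sketch of that construction, assuming the every-seventh-word convention; the distractor selection here (random words from the passage) is purely illustrative, whereas operational forms control distractors carefully:

import random

def build_maze(text: str, step: int = 7, seed: int = 0) -> str:
    """Sketch of a maze/Daze-style passage: leave the first sentence
    intact, then replace every `step`-th word with a three-option
    choice set. Distractor selection here is a placeholder."""
    rng = random.Random(seed)
    words = text.split()
    first_period = next(i for i, w in enumerate(words) if w.endswith("."))
    out = words[: first_period + 1]
    vocab = sorted({w.strip(".,") for w in words})
    for i, word in enumerate(words[first_period + 1 :], start=1):
        if i % step == 0:
            target = word.strip(".,")
            distractors = rng.sample(
                [v for v in vocab if v.lower() != target.lower()], 2
            )
            choices = [target] + distractors
            rng.shuffle(choices)
            out.append("(" + " / ".join(choices) + ")")
        else:
            out.append(word)
    return " ".join(out)

print(build_maze("The fox ran into the field. It was looking for something "
                 "to eat because it had not eaten since the day before."))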

The test takes 3 minutes to administer and can be given in a group setting.

There are 20 alternate forms per measure.

Raw scores and developmental benchmarks are available. Raw scores are provided as the reading level of the student. Cut Points for each proficiency level are provided. Developmental benchmarks for each measure, grade, and time of year (beginning, middle, end) report each score as Above Proficient, Proficient, Below Proficient, or Far Below Proficient. Raw scores, cut points, and benchmark goals are all grade-specific but are not strictly based on grade norms.

 

Reliability of the Performance Level Score

Grade: 3 | 4 | 5 | 6
Rating: Full bubble | Full bubble | Full bubble | Full bubble

Type of Reliability | Age or Grade | n | Coefficient (median) | SEM | Information (including normative data) / Subjects
Alternate-Form | 3 | 40 | 0.83 | 3.91 | Participants were students from five schools in one district.
Alternate-Form | 4 | 40 | 0.75 | 4.00 |
Alternate-Form | 5 | 61 | 0.83 | 4.68 |
Alternate-Form | 6 | 60 | 0.79 | 2.95 |
Inter-rater | 3 | 25 | 0.99 | | Participants were students randomly selected from five schools.
Inter-rater | 4 | 25 | 0.98 | |
Inter-rater | 5 | 26 | 0.99 | |
Inter-rater | 6 | 20 | 0.99 | |
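For context, the classical relationship between alternate-form reliability and the standard error of measurement is SEM = SD × sqrt(1 − r). The table reports r and SEM but not the score SD, so the SD can be recovered by inverting the formula; a quick Python check against the grade 3 row, assuming that conventional formula was used here:

import math

def sd_from_sem(sem: float, r: float) -> float:
    """Invert SEM = SD * sqrt(1 - r) to recover the implied score SD."""
    return sem / math.sqrt(1.0 - r)

# Grade 3 alternate-form row: r = 0.83, SEM = 3.91
print(round(sd_from_sem(3.91, 0.83), 2))  # -> 9.48, the implied score SD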

 

 

 

Reliability of the Slope

Grade: 3 | 4 | 5 | 6
Rating: Full bubble | Full bubble | Empty bubble | Empty bubble

Type of Reliability | Age or Grade | n | Coefficient (median) | SEM | Information (including normative data) / Subjects
HLM | 3 | 1562 | 0.62 | | Reliability of slope was computed using data from school year 2011-2012. 19% African American, 15% Hispanic, 6% Asian or Pacific Islander, 9% Multi-race; 12% subsidized lunch; 14% special education; 14% English as second language. Weekly assessments over 12 months (i.e., 6-31 assessments; mean=13.00).
HLM | 4 | 471 | 0.61 | | Reliability of slope was computed using data from school year 2011-2012. 13% African American, 12% Hispanic, 9% Asian or Pacific Islander, 7% Multi-race; 28% subsidized lunch; 3% special education; 12% English as second language. Weekly assessments over 12 months (i.e., 6-25 assessments; mean=9.24).
HLM | 5 | 396 | 0.42 | | Reliability of slope was computed using data from school year 2011-2012. 18% African American, 14% Hispanic, 6% Asian or Pacific Islander, 6% Multi-race; 15% subsidized lunch; 7% special education; 7% English as second language. Weekly assessments over 12 months (i.e., 6-26 assessments; mean=9.58).
HLM | 6 | 570 | 0.35 | | Reliability of slope was computed using data from school year 2011-2012. 2% African American, 7% Hispanic; 16% subsidized lunch; 8% special education; 4% English as second language. Weekly assessments over 12 months (i.e., 6-28 assessments; mean=10.58).
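For readers reproducing this kind of analysis, HLM slope reliability is conventionally the ratio of true (between-student) slope variance to total observed slope variance (Raudenbush & Bryk's lambda). A sketch of that computation with statsmodels, assuming long-format data with illustrative column names `student`, `week`, and `score` (none of these names come from the source):

import pandas as pd
import statsmodels.api as sm

def slope_reliability(df: pd.DataFrame) -> float:
    """Estimate HLM slope reliability as tau11 / (tau11 + sigma2/SSX):
    tau11 is the between-student (true) slope variance, and sigma2/SSX
    approximates the sampling variance of each student's OLS slope
    (exact for balanced designs)."""
    model = sm.MixedLM.from_formula(
        "score ~ week", groups="student", re_formula="~week", data=df
    )
    fit = model.fit()
    tau11 = fit.cov_re.loc["week", "week"]  # random-slope variance
    sigma2 = fit.scale                      # residual (level-1) variance
    # Average within-student sum of squares of the time variable.
    ssx = (
        df.groupby("student")["week"]
        .apply(lambda t: ((t - t.mean()) ** 2).sum())
        .mean()
    )
    return tau11 / (tau11 + sigma2 / ssx)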

 

Validity of the Performance Level Score

Grade: 3 | 4 | 5 | 6
Rating: Full bubble | Full bubble | Full bubble | Full bubble

Type of Validity | Age or Grade | Test or Criterion | n | Coefficient (median) | Information (including normative data) / Subjects
Concurrent | 3 | ISTEP+ ELA | 3814 | 0.71 | Validity was computed using data from school year 2011-2012. 14% African American, 12% Hispanic, 4% Asian, 7% Multi-race; 16% subsidized lunch; 8% special education; 10% English as second language.
Predictive | 3 | GRADE Total Test | 184 | 0.67 | Participants included students in third through sixth grade from thirteen schools across five states.
Predictive | 4 | GRADE Total Test | 184 | 0.68 |
Predictive | 5 | GRADE Total Test | 194 | 0.61 |
Predictive | 6 | GRADE Total Test | 103 | 0.61 |
Concurrent | 3 | GRADE Total Test | 184 | 0.67 |
Concurrent | 4 | GRADE Total Test | 184 | 0.68 |
Concurrent | 5 | GRADE Total Test | 194 | 0.66 |
Concurrent | 6 | GRADE Total Test | 103 | 0.64 |

 

Predictive Validity of the Slope of Improvement

Grade: 3 | 4 | 5 | 6
Rating: Empty bubble | dash | dash | dash

Type of Validity | Age or Grade | Test or Criterion | n | Coefficient (median) | Information (including normative data) / Subjects
Concurrent | 3 | ISTEP+ ELA | 1272 | 0.36 | Validity of slope was computed using data from school year 2011-2012. 19% African American, 15% Hispanic, 7% Asian, 8% Multi-race; 15% subsidized lunch; 9% special education; 13% English as second language. Weekly assessments over 12 months (i.e., 6-30 assessments; mean=10.72).

 

Bias Analysis Conducted

Grade: 3 | 4 | 5 | 6
Rating: No | No | No | No

Disaggregated Reliability and Validity Data

Grade: 3 | 4 | 5 | 6
Rating: Yes | Yes | Yes | Yes

Disaggregated Reliability of the Slope

Type of Reliability | Age or Grade | n | Coefficient (median) | SEM | Information (including normative data) / Subjects
HLM (Caucasian) | 3 | 486 | 0.61 | | Reliability of slope was computed using data from school year 2011-2012. 12% subsidized lunch; 14% special education; 14% English as second language. Weekly assessments over 12 months (i.e., 6-31 assessments; mean=13.00).
HLM (African American) | 3 | 289 | 0.61 | |
HLM (Hispanic) | 3 | 232 | 0.66 | |
HLM (Caucasian) | 4 | 243 | 0.59 | | Reliability of slope was computed using data from school year 2011-2012. 28% subsidized lunch; 3% special education; 12% English as second language. Weekly assessments over 12 months (i.e., 6-25 assessments; mean=9.24).
HLM (African American) | 4 | 62 | 0.68 | |
HLM (Hispanic) | 4 | 53 | 0.56 | |
HLM (Caucasian) | 5 | 201 | 0.42 | | Reliability of slope was computed using data from school year 2011-2012. 15% subsidized lunch; 7% special education; 7% English as second language. Weekly assessments over 12 months (i.e., 6-26 assessments; mean=9.58).
HLM (African American) | 5 | 72 | 0.48 | |
HLM (Hispanic) | 5 | 52 | 0.49 | |
HLM (Caucasian) | 6 | 224 | 0.32 | | Reliability of slope was computed using data from school year 2011-2012. 2% African American, 7% Hispanic; 16% subsidized lunch; 8% special education; 4% English as second language. Weekly assessments over 12 months (i.e., 6-28 assessments; mean=10.58).
HLM (African American) | 6 | 9 | 0.49 | |
HLM (Hispanic) | 6 | 36 | 0.15 | |

 

 

Disaggregated Validity of the Performance Level Score

Type of Validity | Age or Grade | Test or Criterion | n | Coefficient (median) | Information (including normative data) / Subjects
Concurrent (Caucasian) | 3 | ISTEP+ ELA | 2028 | 0.70 | Validity was computed using data from school year 2011-2012. 16% subsidized lunch; 8% special education; 10% English as second language.
Concurrent (African American) | 3 | ISTEP+ ELA | 547 | 0.67 |
Concurrent (Hispanic) | 3 | ISTEP+ ELA | 451 | 0.67 |

 

 

Disaggregated Predictive Validity of the Slope of Improvement

Type of Validity | Age or Grade | Test or Criterion | n | Coefficient (median) | Information (including normative data) / Subjects
Concurrent (Caucasian) | 3 | ISTEP+ ELA | 653 | 0.39 | Validity of slope was computed using data from school year 2011-2012. 15% subsidized lunch; 9% special education; 13% English as second language. Weekly assessments over 12 months (i.e., 6-30 assessments; mean=10.72).
Concurrent (African American) | 3 | ISTEP+ ELA | 237 | 0.31 |
Concurrent (Hispanic) | 3 | ISTEP+ ELA | 185 | 0.37 |

 

Alternate Forms

Grade: 3 | 4 | 5 | 6
Rating: dash | dash | dash | dash

Rates of Improvement Specified

Grade: 3 | 4 | 5 | 6
Rating: dash | dash | dash | dash

End-of-Year Benchmarks

Grade: 3 | 4 | 5 | 6
Rating: Full bubble | Full bubble | Full bubble | Full bubble

1. Are benchmarks for minimum acceptable end-of-year performance specified in your manual or published materials?

Yes.

a. Specify the end-of-year performance standards:

Three end-of-year performance standards are specified: Well Below Benchmark, Below Benchmark, and At or Above Benchmark. These standards are used to indicate increasing odds of achieving At or Above Benchmark at the next benchmark administration period.

b. Basis for specifying minimum acceptable end-of-year performance:

Criterion-referenced.

The DIBELS Next benchmark goals provide targeted levels of skill that students need to achieve by specific times to be considered to be making adequate progress. In developing the benchmark goals, our focus was on adequate reading skills in general, not on a particular state assessment, published reading test, or national assessment. A student with adequate reading skills should read adequately regardless of the specific assessment that is used. In the 2007 National Assessment of Educational Progress, 34% of students scored below the Basic level of reading skill, and 68% scored below the Proficient level. According to the NAEP, "Basic denotes partial mastery of prerequisite knowledge and skills that are fundamental for proficient work at a given grade" (Daane et al., 2005, p. 18). Thus, students who score at the 40th percentile or above on a high-quality, nationally norm-referenced test are likely to be rated Basic or above on the NAEP and can be considered to have adequate reading skills.

DIBELS Next benchmark goals are empirically derived, criterion-referenced target scores that represent adequate reading progress. The cut-points for risk indicate a level of skill below which the student is unlikely to achieve a subsequent reading goal without receiving additional, targeted instructional support.

Daane, M.C., Campbell, J.R., Grigg, W.S., Goodman, M.J., & Oranje, A. (2005). Fourth-Grade Students Reading Aloud: NAEP 2002 Special Study of Oral Reading (NCES 2006–469). U.S. Department of Education. Institute of Education Sciences, National Center for Education Statistics. Washington, DC: Government Printing Office. Available http://nces.ed.gov/nationsreportcard/pdf/studies/2006469.pdf. Accessed 6/22/2010.

c. Specify the benchmarks:

Grade | Score Level | Likely Need for Support | End of Year
Third | At or Above Benchmark | Likely to Need Core Support | 19+
Third | Below Benchmark | Likely to Need Strategic Support | 14-18
Third | Well Below Benchmark | Likely to Need Intensive Support | 0-13
Fourth | At or Above Benchmark | Likely to Need Core Support | 24+
Fourth | Below Benchmark | Likely to Need Strategic Support | 20-23
Fourth | Well Below Benchmark | Likely to Need Intensive Support | 0-19
Fifth | At or Above Benchmark | Likely to Need Core Support | 24+
Fifth | Below Benchmark | Likely to Need Strategic Support | 18-23
Fifth | Well Below Benchmark | Likely to Need Intensive Support | 0-17
Sixth | At or Above Benchmark | Likely to Need Core Support | 21+
Sixth | Below Benchmark | Likely to Need Strategic Support | 15-20
Sixth | Well Below Benchmark | Likely to Need Intensive Support | 0-14

The benchmark goal is the number provided in the At or Above Benchmark row. The cut point for risk is the first number provided in the Below Benchmark row.
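The table reduces to a lookup from grade and end-of-year Daze score to a likely level of support; a direct encoding of the cut scores above (illustrative Python, not part of the published materials):

# End-of-year Daze cut scores from the table above:
# grade -> (benchmark goal, cut point for risk).
CUTS = {3: (19, 14), 4: (24, 20), 5: (24, 18), 6: (21, 15)}

def support_level(grade: int, score: int) -> str:
    """Map an end-of-year Daze score to its benchmark status."""
    goal, risk_cut = CUTS[grade]
    if score >= goal:
        return "At or Above Benchmark (likely to need core support)"
    if score >= risk_cut:
        return "Below Benchmark (likely to need strategic support)"
    return "Well Below Benchmark (likely to need intensive support)"

print(support_level(5, 20))  # -> Below Benchmark (18-23 in fifth grade)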

d. Basis for specifying these benchmarks?

Criterion-referenced.

In our benchmark goal study, we used the 40th percentile or above on the Group Reading Assessment and Diagnostic Evaluation (GRADE, Williams, 2001) as an indicator that the student was making adequate progress in acquisition of important early reading and/or reading skills.

For more information about the DIBELS Next Benchmark Goals, see Chapter 4 of the DIBELS Next Technical Manual.
Good, R. H., Kaminski, R. A., Dewey, E., Wallin, J., Powell-Smith, K., & Latimer, R. (2013). DIBELS Next Technical Manual. Eugene, OR: Dynamic Measurement Group, Inc.
Williams, K. T. (2001). Group Reading Assessment and Diagnostic Evaluation (GRADE). New York: Pearson.

Procedure for specifying benchmarks for end-of-year performance levels:

DIBELS rests on a step-by-step vision of reading development. Student skills at or above benchmark at the beginning of the year put the odds in favor of the student achieving the middle-of-year benchmark goal. In turn, students with skills at or above benchmark in the middle of the year have the odds in favor of achieving the end-of-year benchmark goal. Finally, students with skills at or above benchmark at the end of the year have the odds in favor of demonstrating adequate reading skills on a wide variety of external measures of reading proficiency. Our fundamental logic for developing the benchmark goals and cut points for risk was to begin with the external outcome goal and work backward through that step-by-step system. We first obtained an external criterion measure (the GRADE Total Test Raw Score) at the end of the year, with a level of performance that represents adequate reading skills. Next, we specified the benchmark goal and cut point for risk on the end-of-year DIBELS Composite Score with respect to the end-of-year external criterion. Then, using the end-of-year DIBELS Composite goal as an internal criterion, we established the benchmark goals and cut points for risk on the middle-of-year DIBELS Composite Score. Finally, we established the benchmark goals and cut points for risk on the beginning-of-year DIBELS Composite Score, using the middle-of-year DIBELS Composite Score as an internal criterion. Once the benchmark goals and cut points for risk were established for the DIBELS Composite Score, they were used to establish the goals and cut points for risk for each individual DIBELS Next measure, following the same step-by-step procedures.
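That backward-chaining logic can be made concrete: given each student's score in an earlier period, their score in the later period, and a criterion already fixed on the later measure, the earlier-period cut is the score at or above which students have favorable odds of reaching the later criterion. A sketch of one way to operationalize it; the 0.80 odds target and the data layout are illustrative assumptions, not the published derivation:

import numpy as np

def derive_cut(earlier, later, later_goal, target_odds=0.80):
    """Smallest earlier-period score such that students scoring at or
    above it meet the later-period goal at a rate of `target_odds` or
    better. `earlier` and `later` are parallel per-student arrays."""
    earlier, later = np.asarray(earlier), np.asarray(later)
    for cut in np.sort(np.unique(earlier)):
        at_or_above = earlier >= cut
        hit_rate = (later[at_or_above] >= later_goal).mean()
        if hit_rate >= target_odds:
            return cut
    return None

# Chain backward: fix the end-of-year external goal first, derive the
# end-of-year composite cut, then reuse each derived cut as the
# criterion for the preceding benchmark period.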

Sensitive to Student Improvement

Grade: 3 | 4 | 5 | 6
Rating: Full bubble | Full bubble | Full bubble | Full bubble

1. Describe evidence that the monitoring system produces data that are sensitive to student improvement (i.e., when student learning actually occurs, student performance on the monitoring tool increases on average).

Slopes on the progress-monitoring tool are significantly greater than zero, and slopes differ significantly among special education, low-achieving, average-achieving, and high-achieving students, as shown in the tables below (a computational sketch follows them).

Grade | All Sample: n / Slope / SE | Special Ed: n / Slope / SE | Non Special Ed: n / Slope / SE
3 | 1,491 / 1.19 / 0.02 | 186 / 0.87 / 0.06 | 1,039 / 1.22 / 0.03
4 | 456 / 1.59 / 0.05 | 10 / 1.32 / 0.29 | 221 / 1.31 / 0.06
5 | 376 / 1.31 / 0.05 | 19 / 0.47 / 0.14 | 83 / 0.84 / 0.08
6 | 10 / 0.55 / 0.12 | NA / NA / NA | NA / NA / NA

 

Grade | High Achieving (ISTEP+): n / Slope / SE | Average Achieving (ISTEP+): n / Slope / SE | Low Achieving (ISTEP+): n / Slope / SE
3 | 491 / 5.00 / 0.16 | 2,650 / 3.07 / 0.05 | 601 / 1.26 / 0.06
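In outline, the slope evidence above can be reproduced by fitting a least-squares slope per student, testing whether the mean slope exceeds zero, and comparing slopes across groups. A sketch assuming long-format data with illustrative column names `student`, `week`, `score`, and `group` (none from the source):

import numpy as np
import pandas as pd
from scipy import stats

def student_slopes(df: pd.DataFrame) -> pd.Series:
    """OLS slope of score on week, fit separately for each student."""
    return df.groupby("student").apply(
        lambda g: np.polyfit(g["week"], g["score"], 1)[0]
    )

def sensitivity_tests(df: pd.DataFrame):
    slopes = student_slopes(df)
    # Are slopes greater than zero on average?
    t_vs_zero = stats.ttest_1samp(slopes, 0.0, alternative="greater")
    # Do mean slopes differ across achievement groups?
    group_of = df.groupby("student")["group"].first()
    by_group = [slopes[group_of == g] for g in group_of.unique()]
    f_across = stats.f_oneway(*by_group)
    return t_vs_zero, f_across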

 

Decision Rules for Changing Instruction

Grade: 3 | 4 | 5 | 6
Rating: Empty bubble | Empty bubble | Empty bubble | Empty bubble

Decision Rules for Increasing Goals

Grade: 3 | 4 | 5 | 6
Rating: Empty bubble | Empty bubble | Empty bubble | Empty bubble

Improved Student Achievement

Grade: 3 | 4 | 5 | 6
Rating: dash | dash | dash | dash

Improved Teacher Planning

Grade: 3 | 4 | 5 | 6
Rating: Empty bubble | dash | dash | dash