DIBELS Next

Area: Phoneme Segmentation Fluency

Cost; Technology, Human Resources, and Accommodations for Special Needs; Service and Support; Purpose and Other Implementation Information; Usage and Reporting

Amplify: The basic pricing plan is an annual per-student license of $14.90. For users already using an mCLASS assessment product, the cost to add mCLASS:DIBELS Next is $6 per student.

Sopris: There are three purchasing options for implementing Progress Monitoring materials in Year 1:

1) Progress Monitoring via Online Test Administration and Scoring

2) Progress Monitoring materials as part of the purchase of Classroom Sets, which also include Benchmark materials and DIBELS Next Survey

3) Individual Progress Monitoring materials.

DIBELS Next Classroom Sets contain everything needed for one person to conduct the Benchmark Assessment for 25 students and the Progress Monitoring Assessment for up to five students. These easy-to-implement kits simplify the distribution and organization of DIBELS Next materials.

DMG: Materials may be downloaded at no cost from DMG at http://dibels.org/next. Minimal reproduction costs are associated with printing.

Testers require 4-8 hours of training. Examiners must, at a minimum, be paraprofessionals.

Training manuals and materials are field tested and are included in the cost of the tool.

Amplify’s Customer Care Center offers complete user-level support from 7:00 a.m. to 7:00 p.m. EST, Monday through Friday. Customers may contact a customer support representative via telephone, e-mail, or electronically through the mCLASS website. Additionally, customers have self-service access to instructions, documents, and frequently asked questions on the website. The research staff and product teams are available to answer questions about the content of the assessments.

Accommodations:

DIBELS Next is an assessment instrument well suited to capturing the developing reading skills of students in special education who are learning to read, with a few exceptions: a) students who are deaf; b) students who have fluency-based speech disabilities, e.g., stuttering or oral apraxia; c) students who are learning to read in a language other than English or Spanish; d) students with severe disabilities. Use of DIBELS Next is appropriate for all other students, including those in special education for whom reading connected text is an IEP goal. For students receiving special education, it may be necessary to adjust goals and timelines. Approved accommodations are listed in the administration manual.

Where to obtain:

Amplify Education, Inc.
55 Washington Street, Suite 900
Brooklyn, NY 11201
1-800-823-1969, option 1
www.amplify.com

Sopris Learning
17855 Dallas Parkway, Suite 400, Dallas, TX 75287-6816
http://www.soprislearning.com

DMG
859 Willamette Street, Suite 320, Eugene, OR 97401
541-431-6931
(888) 399-1995
http://dibels.org

DIBELS Next measures are brief, powerful indicators of foundational early literacy skills that: are quick to administer and score; serve as universal screening (or benchmark assessment) and progress monitoring; identify students in need of intervention support; evaluate the effectiveness of interventions; and support the RtI/Multi-tiered model. DIBELS Next comprises six measures: First Sound Fluency (FSF), Letter Naming Fluency (LNF), Phoneme Segmentation Fluency (PSF), Nonsense Word Fluency (NWF), DIBELS Oral Reading Fluency (DORF), and Daze. 

Phoneme Segmentation Fluency (PSF) is a brief, direct measure of phonemic awareness. PSF assesses the student’s fluency in segmenting a spoken word into its component parts or sound segments.

Administration of the test takes 1 minute, and it is recommended that the test be administered in an individual setting.

There are 20 alternate forms per measure.

Raw scores and developmental benchmarks are available. Raw scores indicate the student’s current level of performance on the measure. Cut points for each proficiency level are provided. Developmental benchmarks for each measure, grade, and time of year (beginning, middle, end) classify each score as Above Proficient, Proficient, Below Proficient, or Far Below Proficient. Raw scores, cut points, and benchmark goals are all grade-specific but are not strictly based on grade norms.

 

Reliability of the Performance Level Score: Convincing Evidence

Type of Reliability | Age or Grade | n (range) | Coefficient (median) | SEM | Information (including normative data) / Subjects
Three-form Alternate-form | K | 29 | 0.70 | 10.12 | Participants were a stratified random sample drawn from thirteen schools across five states based on beginning-of-year DIBELS performance.
Three-form Alternate-form | 1 | 164 | 0.78 | 6.51 | Three schools across two districts participated. Schools involved are located in one state in the East North Central region of the United States.

Three-form reliability estimates are provided to correspond to the recommended DIBELS practice of examining a pattern of performance on repeated assessments for increased confidence in decisions. The reliability of three-form aggregates is estimated using the Spearman-Brown Prophecy Formula.
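For reference, the Spearman-Brown prediction used here takes the form r_k = (k * r) / (1 + (k - 1) * r), where r is the single-form reliability and k is the number of forms aggregated. The short sketch below only illustrates that formula; the single-form value of 0.44 is a hypothetical input, not a figure from the DIBELS Next Technical Manual.

```python
def spearman_brown(r_single: float, k: int = 3) -> float:
    """Predicted reliability of an aggregate of k parallel forms,
    given the reliability of a single form (Spearman-Brown prophecy formula)."""
    return (k * r_single) / (1 + (k - 1) * r_single)

# Hypothetical illustration: a single-form alternate-form correlation of 0.44
# would correspond to a three-form aggregate reliability of about 0.70,
# the magnitude reported above for kindergarten PSF.
print(round(spearman_brown(0.44, k=3), 2))  # 0.70
```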

 

Reliability of the Slope: Convincing Evidence

Type of Reliability | Age or Grade | n (range) | Coefficient (median) | SEM | Information (including normative data) / Subjects
HLM | K | 2,655 | 0.72 | | Reliability of slope was computed using data from school year 2011-2012. 14% African American, 15% Hispanic, 3% Asian or Pacific Islander, 8% multi-race; 14% subsidized lunch; 11% special education; 11% English as a second language. Weekly assessments over 12 months (i.e., 6-28 assessments; mean = 9.17).
HLM | 1 | 1,822 | 0.60 | | Reliability of slope was computed using data from school year 2011-2012. 9% African American, 10% Hispanic, 2% Asian or Pacific Islander, 4% multi-race; 32% subsidized lunch; 14% special education; 6% English as a second language. Weekly assessments over 12 months (i.e., 6-28 assessments; mean = 8.05).

 

Validity of the Performance Level Score: Unconvincing Evidence

Type of Validity | Age or Grade | Test or Criterion | n (range) | Coefficient (median) | Information (including normative data) / Subjects
Predictive | K | GRADE Total Test | 170 | 0.34 | Participants were a stratified random sample drawn from thirteen schools across five states based on beginning-of-year DIBELS performance.
Concurrent | K | GRADE Total Test | 170 | 0.24 |
Predictive | 1 | GRADE Total Test | 193 | 0.33 |

 

Predictive Validity of the Slope of Improvement: Data Unavailable

Disaggregated Reliability and Validity Data: Unconvincing Evidence

Disaggregated Reliability of the Slope

Type of Reliability | Age or Grade | n (range) | Coefficient (median) | SEM | Information (including normative data) / Subjects
HLM (Caucasian) | K | 1,052 | 0.69 | | Reliability of slope was computed using data from school year 2011-2012. 14% subsidized lunch; 11% special education; 11% English as a second language. Weekly assessments over 12 months (i.e., 6-28 assessments; mean = 9.17).
HLM (African American) | K | 311 | 0.76 | |
HLM (Hispanic) | K | 374 | 0.76 | |
HLM (Caucasian) | 1 | 837 | 0.58 | | Reliability of slope was computed using data from school year 2011-2012. 32% subsidized lunch; 14% special education; 6% English as a second language. Weekly assessments over 12 months (i.e., 6-28 assessments; mean = 8.05).
HLM (African American) | 1 | 161 | 0.66 | |
HLM (Hispanic) | 1 | 170 | 0.66 | |

 

Alternate Forms: Partially Convincing Evidence

Sensitive to Student Improvement: Convincing Evidence

1. Describe evidence that the monitoring system produces data that are sensitive to student improvement (i.e., when student learning actually occurs, student performance on the monitoring tool increases on average).

Slopes on the progress-monitoring tool are significantly greater than zero; the slopes are significantly different for special-education vs. non-special-education students.  

Grade | All Sample (n, Slope, SE) | Special Ed (n, Slope, SE) | Non Special Ed (n, Slope, SE)
K | 2,521, 4.25, 0.05 | 226, 3.50, 0.17 | 1,236, 4.16, 0.07
1 | 1,718, 3.33, 0.06 | 211, 2.97, 0.15 | 631, 3.19, 0.09

 

End-of-Year Benchmarks: Convincing Evidence

1. Are benchmarks for minimum acceptable end-of-year performance specified in your manual or published materials?

Yes.

a. Specify the end-of-year performance standards: Three end-of-year performance standards are specified: Well Below Benchmark, Below Benchmark, and At or Above Benchmark. These standards are used to indicate increasing odds of achieving At or Above Benchmark at the next benchmark administration period.

b. Basis for specifying minimum acceptable end-of-year performance:

Criterion-referenced.

The DIBELS Next benchmark goals provide targeted levels of skill that students need to achieve by specific times to be considered to be making adequate progress. In developing the benchmark goals, our focus was on generally adequate reading skills, not on a particular state assessment, published reading test, or national assessment. A student with adequate reading skills should read adequately regardless of the specific assessment that is used. In the 2007 National Assessment of Educational Progress, 34% of students were judged to be below the Basic level of reading skills, and 68% of students were judged to be below the Proficient level. According to the NAEP, “Basic denotes partial mastery of prerequisite knowledge and skills that are fundamental for proficient work at a given grade” (Daane et al., 2005, p. 18). Thus, students who score at the 40th percentile or above on a high-quality, nationally norm-referenced test are likely to be rated Basic or above on the NAEP and can be considered to have adequate reading skills.

DIBELS Next benchmark goals are empirically derived, criterion-referenced target scores that represent adequate reading progress. The cut-points for risk indicate a level of skill below which the student is unlikely to achieve a subsequent reading goal without receiving additional, targeted instructional support.

Daane, M.C., Campbell, J.R., Grigg, W.S., Goodman, M.J., & Oranje, A. (2005). Fourth-Grade Students Reading Aloud: NAEP 2002 Special Study of Oral Reading (NCES 2006–469). U.S. Department of Education. Institute of Education Sciences, National Center for Education Statistics. Washington, DC: Government Printing Office. Available http://nces.ed.gov/nationsreportcard/pdf/studies/2006469.pdf. Accessed 6/22/2010.

c. Specify the benchmarks:

Grade | Score Level | Likely Need for Support | End of Year
Kindergarten | At or Above Benchmark | Likely to Need Core Support | 40+
Kindergarten | Below Benchmark | Likely to Need Strategic Support | 25-39
Kindergarten | Well Below Benchmark | Likely to Need Intensive Support | 0-24
First Grade | At or Above Benchmark | Likely to Need Core Support | NA
First Grade | Below Benchmark | Likely to Need Strategic Support | NA
First Grade | Well Below Benchmark | Likely to Need Intensive Support | NA

 

d. Basis for specifying these benchmarks?

Criterion-referenced.

In our benchmark goal study, we used the 40th percentile or above on the Group Reading Assessment and Diagnostic Evaluation (GRADE, Williams, 2001) as an indicator that the student was making adequate progress in acquisition of important early reading and/or reading skills.

For more information about the DIBELS Next Benchmark Goals, see Chapter 4 of the DIBELS Next Technical Manual.

Good, R. H., Kaminski, R. A., Dewey, E., Wallin, J., Powell-Smith, K. A., & Latimer, R. (2013). DIBELS Next Technical Manual. Eugene, OR: Dynamic Measurement Group, Inc.

Williams, K. T. (2001). Group Reading Assessment and Diagnostic Evaluation (GRADE). New York: Pearson.

Procedure for specifying benchmarks for end-of-year performance levels: The guiding vision for DIBELS is a step-by-step one. Student skills at or above benchmark at the beginning of the year put the odds in favor of the student achieving the middle-of-year benchmark goal. In turn, students with skills at or above benchmark in the middle of the year have the odds in favor of achieving the end-of-year benchmark goal. Finally, students with skills at or above benchmark at the end of the year have the odds in favor of adequate reading skills on a wide variety of external measures of reading proficiency. Our fundamental logic for developing the benchmark goals and cut points for risk was to begin with the external outcome goal and work backward through that step-by-step system. We first obtained an external criterion measure (the GRADE Total Test Raw Score) at the end of the year, with a level of performance that represents adequate reading skills (the 40th percentile). Next, we specified the benchmark goal and cut point for risk on the end-of-year DIBELS Composite Score with respect to the end-of-year external criterion. Then, using the DIBELS Composite end-of-year goal as an internal criterion, we established the benchmark goals and cut points for risk on end-of-year PSF. Finally, we established the benchmark goals and cut points for risk on middle-of-year PSF using end-of-year PSF as an internal criterion.
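As a rough sketch of that backward-chaining logic, the code below searches for the lowest earlier-period score at which students' odds of reaching the next criterion meet a chosen threshold. The function name find_cut_score, the target_odds parameter, and the 0.80 default are hypothetical illustrations, not the exact procedure or values documented in the DIBELS Next Technical Manual.

```python
from typing import Optional, Sequence, Tuple

def find_cut_score(pairs: Sequence[Tuple[int, bool]],
                   target_odds: float = 0.80) -> Optional[int]:
    """Return the lowest earlier-period score at which at least `target_odds`
    of students went on to meet the later criterion.

    `pairs` holds (earlier_score, met_later_criterion) for each student.
    """
    for cut in sorted({score for score, _ in pairs}):
        outcomes = [met for score, met in pairs if score >= cut]
        if outcomes and sum(outcomes) / len(outcomes) >= target_odds:
            return cut
    return None

# Backward chaining (sketch): first link end-of-year PSF to the external
# criterion (e.g., GRADE at or above the 40th percentile), then link
# middle-of-year PSF to the end-of-year PSF goal derived in the prior step.
```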

Rates of Improvement Specified: Convincing Evidence

Decision Rules for Changing Instruction: Unconvincing Evidence

Specification of validated decision rules for when changes to instruction need to be made:  We recommend using a goal-oriented rule for evaluating a student’s response to intervention that is straightforward for teachers to understand and use. Decisions about a student’s progress are based on comparisons of DIBELS scores that are plotted on a graph and the aimline, or expected rate of progress. We suggest that educational professionals consider instructional modifications when student performance falls below the aimline for three consecutive points (Kaminski, Cummings, Powell-Smith, and Good, 2008).

Evidentiary basis for these decision rules: This recommended decision rule is based on early work with CBM (Fuchs, 1988, 1989) and precision teaching (White & Haring, 1980) and allows for a minimum of three data points to be gathered before any decision is made. As when validating a student’s need for support, a pattern of performance is considered before making individual student decisions (Kaminski, Cummings, Powell-Smith, and Good, 2008).
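As an illustration of the three-points-below-the-aimline rule, the sketch below flags a student for an instructional change when the three most recent scores all fall under the expected values on the aimline. The straight-line aimline calculation, the function names, and the sample data are illustrative assumptions rather than published DIBELS procedures.

```python
from typing import List

def aimline_value(baseline: float, goal: float, total_weeks: int, week: int) -> float:
    """Expected score in a given week on a straight aimline from baseline to goal."""
    return baseline + (goal - baseline) * week / total_weeks

def needs_instructional_change(scores: List[float], aimline: List[float]) -> bool:
    """True if the three most recent scores all fall below their aimline values."""
    if len(scores) < 3:
        return False
    return all(score < aim for score, aim in list(zip(scores, aimline))[-3:])

# Hypothetical example: baseline 20, goal 56 over an 18-week monitoring period.
aimline = [aimline_value(baseline=20, goal=56, total_weeks=18, week=w) for w in range(1, 5)]
scores = [21, 23, 25, 27]
print(needs_instructional_change(scores, aimline))  # True: last three points below the aimline
```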

Kaminski, R. A., Cummings, K., Powell-Smith, K. A., & Good, R. H. (2008). Best Practices in Using Dynamic Indicators of Basic Early Literacy Skills for Formative Assessment and Evaluation. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology V (Vol. 4, pp. 1181-1204). Bethesda, MD: NASP Publications.

Fuchs, L. S. (1988). Effects of computer-managed instruction on teachers' implementation of systematic monitoring programs and student achievement. Journal of Educational Research, 81, 294-304.

Fuchs, L. S. (1989). Evaluating solutions: Monitoring progress and revising intervention plans. In M. Shinn (Ed.), Curriculum-based measurement: Assessing special children. New York: Guilford Press.

White, O. R., & Haring, N. G. (1980). Exceptional teaching (2nd ed.). Columbus, OH: Merrill.

Decision Rules for Increasing Goals: Unconvincing Evidence

Specification of validated decision rules for when increases in goals need to be made: In general, it is recommended that support be continued until a student achieves at least three points at or above the goal. If a decision is made to discontinue support, it is recommended that progress monitoring be continued weekly for at least 1 month to ensure that the student is able to maintain growth without the supplemental support. The frequency of progress monitoring is faded gradually as the child’s progress continues to be sufficient (Kaminski, Cummings, Powell-Smith, and Good, 2008).

Evidentiary basis for these decision rules: This recommended decision rule is based on early work with CBM (Fuchs, 1988, 1989) and precision teaching (White & Haring, 1980) and allows for a minimum of three data points to be gathered before any decision is made. As when validating a student’s need for support, a pattern of performance is considered before making individual student decisions (Kaminski, Cummings, Powell-Smith, and Good, 2008).
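One way to read the three-points-at-or-above-the-goal rule is sketched below; the function name and the sample data are hypothetical, and the rule's exact operationalization should follow the published DIBELS guidance.

```python
from typing import List

def met_goal_enough_times(scores: List[float], goal: float, required: int = 3) -> bool:
    """True once at least `required` progress-monitoring scores are at or above the goal,
    one reading of the rule for considering whether to fade supplemental support."""
    return sum(score >= goal for score in scores) >= required

# Example with a hypothetical end-of-year PSF goal of 40:
print(met_goal_enough_times([35, 38, 41, 42, 44], goal=40))  # True: three scores at or above 40
```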

Kaminski, R. A., Cummings, K., Powell-Smith, K. A., & Good, R. H. (2008). Best Practices in Using Dynamic Indicators of Basic Early Literacy Skills for Formative Assessment and Evaluation. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology V (Vol. 4, pp. 1181-1204). Bethesda, MD: NASP Publications.

Fuchs, L. S. (1988). Effects of computer-managed instruction on teachers' implementation of systematic monitoring programs and student achievement. Journal of Educational Research, 81, 294-304.

Fuchs, L. S. (1989). Evaluating solutions: Monitoring progress and revising intervention plans. In M. Shinn (Ed.), Curriculum-based measurement: Assessing special children. New York: Guilford Press.

White, O. R., & Haring, N. G. (1980). Exceptional teaching (2nd ed.). Columbus, OH: Merrill.

Improved Student Achievement: Data Unavailable

Improved Teacher Planning: Data Unavailable