DIBELS Next

Area: Nonsense Word Fluency

This profile covers: Cost; Technology, Human Resources, and Accommodations for Special Needs; Service and Support; Purpose and Other Implementation Information; and Usage and Reporting.

Amplify: The basic pricing plan is an annual per-student license of $14.90. For users already using an mCLASS assessment product, the cost to add mCLASS:DIBELS Next is $6 per student.

Sopris: There are three purchasing options for implementing Progress Monitoring materials in Year 1:

1) Progress Monitoring via Online Test Administration and Scoring

2) Progress Monitoring materials as part of the purchase of Classroom Sets, which also include Benchmark materials and DIBELS Next Survey

3) Individual Progress Monitoring materials.

DIBELS Next Classroom Sets contain everything needed for one person to conduct the Benchmark Assessment for 25 students and the Progress Monitoring Assessment for up to five students. These easy-to-implement kits simplify the distribution and organization of DIBELS Next materials.

DMG: Materials may be downloaded at no cost from DMG at http://dibels.org/next. Minimal reproduction costs are associated with printing.

Testers require 4-8 hours of training. Examiners must, at a minimum, be paraprofessionals.

Training manuals and materials are field tested and are included in the cost of the tool.

Amplify’s Customer Care Center offers complete user-level support from 7:00 a.m. to 7:00 p.m. EST, Monday through Friday. Customers may contact a customer support representative via telephone, e-mail, or electronically through the mCLASS website. Additionally, customers have self-service access to instructions, documents, and frequently asked questions on the website. The research staff and product teams are available to answer questions about the content within the assessments.

Accommodations:

DIBELS Next is an assessment instrument well suited to capturing the developing reading skills of special education students learning to read, with a few exceptions: a) students who are deaf; b) students who have fluency-based speech disabilities, e.g., stuttering or oral apraxia; c) students who are learning to read in a language other than English or Spanish; d) students with severe disabilities. Use of DIBELS Next is appropriate for all other students, including those in special education for whom reading connected text is an IEP goal. For students receiving special education, it may be necessary to adjust goals and timelines. Approved accommodations are listed in the administration manual.

Where to obtain:

Amplify Education, Inc.
55 Washington Street, Suite 900
Brooklyn, NY 11201
1-800-823-1969, option 1
www.amplify.com

Sopris Learning
17855 Dallas Parkway, Suite 400, Dallas, TX 75287-6816
http://www.soprislearning.com

DMG
859 Willamette Street, Suite 320, Eugene, OR 97401
541-431-6931
(888) 399-1995
http://dibels.org

DIBELS Next measures are brief, powerful indicators of foundational early literacy skills that: are quick to administer and score; serve as universal screening (or benchmark assessment) and progress monitoring; identify students in need of intervention support; evaluate the effectiveness of interventions; and support the RtI/Multi-tiered model. DIBELS Next comprises six measures: First Sound Fluency (FSF), Letter Naming Fluency (LNF), Phoneme Segmentation Fluency (PSF), Nonsense Word Fluency (NWF), DIBELS Oral Reading Fluency (DORF), and Daze. 

Nonsense Word Fluency (NWF) is a brief, direct measure of the alphabetic principle and basic phonics. It assesses knowledge of basic letter-sound correspondences and the ability to blend letter sounds into consonant-vowel-consonant (CVC) and vowel-consonant (VC) words. The test items used for NWF are phonetically regular make-believe (nonsense or pseudo) words.

The test takes 1 minute and is administered in an individual setting.

There are 20 alternate forms per measure.

Raw scores and developmental benchmarks are available. Raw scores, cut points, and benchmark goals are all grade-specific but are not strictly based on grade norms.

 

Reliability of the Performance Level Score: Convincing Evidence

Type of Reliability | Age or Grade | n (range) | Coefficient Median | SEM
Alternate-form | K | 1,093 | 0.84 | 5.82
Alternate-form | 1 | 28 | 0.85 | 12.59
Alternate-form | 2 | 718 | 0.82 | 5.70
Inter-rater | K | 25 | 0.99 | NA
Inter-rater | 1 | 25 | 0.99 | NA
Inter-rater | 2 | 25 | 0.90 | NA

Information (including normative data) / Subjects:

Alternate-form: Participants included students in kindergarten through second grade from 634 districts. The sample was approximately 45% White, 20% African American, and 27% Hispanic. See the DIBELS Next Technical Adequacy Brief for more information.

Inter-rater: A total of 3,676 students from ten schools were eligible to participate. Of these, 264 students across all grades were randomly selected in five schools for shadow scoring. All DIBELS Next measures were included in this portion of the study.

 

Reliability of the Slope: Partially Convincing Evidence

Type of Reliability | Age or Grade | n (range) | Coefficient Median | SEM
HLM | K | 779 | 0.86 | 5.65
HLM | 1 | 15,214 | 0.87 | 3.56
HLM | 2 | 1,555 | 0.83 | 5.54

Information (including normative data) / Subjects:

Data was collected during the 2013-2014 school year and included 21,157 students in kindergarten through second grade from 2,196 schools within 634 districts. The sample was approximately 45% White, 20% African American, and 27% Hispanic. For further information on this study, see the DIBELS Next Technical Adequacy Brief.

 

Validity of the Performance Level Score: Partially Convincing Evidence

Type of Validity | Age or Grade | Test or Criterion | n (range) | Coefficient Median
Predictive | K | GRADE Total | 170 | 0.47
Concurrent | K | GRADE Total | 170 | 0.40
Predictive | 1 | GRADE Total | 195 | 0.51
Concurrent | 1 | GRADE Total | 195 | 0.56
Predictive | 2 | GRADE Total | 214 | 0.51

Information (including normative data) / Subjects:

Validity with GRADE Total Test: Participants included students from thirteen schools across five states. For further information on this study, see the DIBELS Next Technical Adequacy Brief and the DIBELS Next Technical Manual.

 

Predictive Validity of the Slope of Improvement: Data Unavailable

Disaggregated Reliability and Validity Data: Unconvincing Evidence

Disaggregated Reliability of Slope

Type of Reliability | Subgroup | Age or Grade | n (range) | Coefficient Median
HLM | Caucasian | K | 1,024 | 0.73
HLM | African American | K | 225 | 0.74
HLM | Hispanic | K | 252 | 0.66
HLM | Caucasian | 1 | 4,552 | 0.84
HLM | African American | 1 | 979 | 0.84
HLM | Hispanic | 1 | 1,086 | 0.84

Information (including normative data) / Subjects:

Reliability of slope was computed using data from the 2011-2012 school year. Kindergarten sample demographics: 9% subsidized lunch, 7% SPED, and 7% ELL. First-grade sample demographics: 26% subsidized lunch, 9% SPED, and 7% ELL. Students received weekly assessments over 12 months (kindergarten: 6-26 assessments, mean = 8.85; first grade: 6-47 assessments, mean = 10.68).

Alternate Forms: Partially Convincing Evidence

1. Evidence that alternate forms are of equal and controlled difficulty or, if IRT based, evidence of item or ability invariance:

The word pool for Nonsense Word Fluency consists of CVC (consonant-vowel-consonant) and VC (vowel-consonant) nonsense words. The letters “q” and “x” were not used, since they typically represent more than one phoneme. The letters “h”, “w”, “y”, and “r” were used only in the initial position, and the letters “c” and “g” were used only in the final position. Real words and words that sounded like inappropriate words were excluded, but words that sounded like real words were not excluded. The words were generated automatically in Microsoft Excel, and the excluded words were identified manually. The final word pool included a total of 1,017 items, two of which were used as example items and so do not appear as test items. The words were then divided into six difficulty categories based on the pattern (CVC and VC) and on the relative difficulty of the consonants. The consonants judged to be easier were b, c, d, f, g, h, k, l, m, n, p, r, s, and t. Letters were judged to be easier if they appear more often in words, since students will see them more often and many curricula teach higher frequency letters first. The categories were:

Difficulty Category | Number and Percent of Items per Form | Total Items in Word Pool
VC, Easy Consonant | 5 items (10%) | 44
VC, Hard Consonant | 2 items (4%) | 11
CVC, First Consonant Easy | 10 items (20%) | 163
CVC, Last Consonant Easy | 10 items (20%) | 247
CVC, Both Consonants Easy | 20 items (40%) | 483
CVC, Both Consonants Hard | 3 items (6%) | 69
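To make the category scheme concrete, here is a minimal Python sketch of how such a word pool could be generated and categorized. The letter restrictions and the easy-consonant list come from the description above; the function names and data structures are illustrative assumptions, not DMG's actual procedure (which used Microsoft Excel), and the resulting counts will differ slightly because real and inappropriate-sounding words were screened out manually.

```python
# Illustrative sketch only: regenerates a CVC/VC nonsense-word pool
# following the rules described above. Names are assumptions.
from itertools import product

VOWELS = "aeiou"
CONSONANTS = set("bcdfghjklmnprstvwyz")   # "q" and "x" excluded
INITIAL_ONLY = set("hwyr")                # used only in initial position
FINAL_ONLY = set("cg")                    # used only in final position
EASY = set("bcdfghklmnprst")              # the "easier" consonants above

def categorize(word):
    """Assign a word to one of the six difficulty categories."""
    if len(word) == 2:                    # VC pattern
        return "VC easy" if word[1] in EASY else "VC hard"
    first_easy, last_easy = word[0] in EASY, word[2] in EASY
    if first_easy and last_easy:
        return "CVC both easy"
    if first_easy:
        return "CVC first easy"
    if last_easy:
        return "CVC last easy"
    return "CVC both hard"

initials = CONSONANTS - FINAL_ONLY
finals = CONSONANTS - INITIAL_ONLY
candidates = [v + c for v, c in product(VOWELS, finals)]
candidates += [c1 + v + c2 for c1, v, c2 in product(initials, VOWELS, finals)]
# Real words and inappropriate-sounding strings were excluded manually in
# the actual procedure; that screening is omitted here, so counts will
# exceed the published 1,017.

pool = {}                                 # (category, vowel) -> words
for word in candidates:
    vowel = word[0] if len(word) == 2 else word[1]
    pool.setdefault((categorize(word), vowel), []).append(word)
```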

 

Each form consists of 50 items. Before creating the individual forms, a stratified sequence of the different difficulty categories was developed. For categories with 10 items on a form, one item appeared on each of the 10 rows. For the category with 20 items on a form, two items appeared on each of the 10 rows. The other categories were randomly distributed across the rows. Within a row, the order of the difficulty categories was random, except that the first two items on a form were selected from two of the easier categories (CVC with both consonants easy, and CVC with the first consonant easy). Once the sequence was determined, the same stratification was applied to all forms, so that the same difficulty categories appear in the same locations on every form. In addition to the stratification of the difficulty categories, each row of five items includes one nonsense word with each of the five vowels, in random order. The order of the vowels was re-randomized for each row and each form. Each word on a form was then randomly selected from the words that matched both the specified difficulty category and the specified vowel.
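Continuing the sketch above, the stratified assembly of a 50-item form could look like the following. The fixed category grid, per-row vowel shuffling, and seeding of the first two slots mirror the description; everything else (function names, the `pool` mapping from the previous sketch) is a simplified assumption.

```python
# Illustrative continuation: assemble 20 parallel forms from `pool`
# (the (category, vowel) -> words mapping built in the previous sketch).
import random

CATEGORY_COUNTS = {                       # items per 50-item form
    "VC easy": 5, "VC hard": 2, "CVC first easy": 10,
    "CVC last easy": 10, "CVC both easy": 20, "CVC both hard": 3,
}

def make_category_grid(rng):
    """Fix a 10x5 grid of difficulty categories, shared by every form."""
    rows = [[] for _ in range(10)]
    for cat, n in CATEGORY_COUNTS.items():
        if n == 10:                       # one item on each of the 10 rows
            for row in rows:
                row.append(cat)
        elif n == 20:                     # two items on each row
            for row in rows:
                row += [cat, cat]
    rest = [c for c, n in CATEGORY_COUNTS.items() if n < 10 for _ in range(n)]
    rng.shuffle(rest)                     # 5 + 2 + 3 = 10 items, one per row
    for row, cat in zip(rows, rest):
        row.append(cat)
    for row in rows:
        rng.shuffle(row)                  # random category order within a row
    # The first two items come from two of the easier categories.
    for target, pos in (("CVC both easy", 0), ("CVC first easy", 1)):
        i = rows[0].index(target)
        rows[0][pos], rows[0][i] = rows[0][i], rows[0][pos]
    return rows

def assemble_form(grid, pool, rng):
    """Fill the shared grid; each row uses all five vowels in random order."""
    form = []
    for row in grid:
        vowels = list("aeiou")
        rng.shuffle(vowels)
        for category, vowel in zip(row, vowels):
            form.append(rng.choice(pool[(category, vowel)]))
    return form

rng = random.Random(0)
grid = make_category_grid(rng)            # same stratification for all forms
forms = [assemble_form(grid, pool, rng) for _ in range(20)]
```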

The progress monitoring forms and benchmark assessment forms for Nonsense Word Fluency are equivalent parallel forms constructed under the strict guidelines outlined above and are interchangeable.

During the DIBELS Next benchmark goals study, a randomly selected progress monitoring form was administered to 27 kindergarten students and 28 first-grade students approximately 2 weeks after middle-of-year benchmark assessment. Three-form alternate-form reliability coefficients ranged from 0.88 to 0.97 (see page 87 in the DIBELS Next Technical Manual).

2. Number of alternate forms of equal and controlled difficulty:

20 alternate forms.

 

Sensitive to Student Improvement: Convincing Evidence

Describe evidence that the monitoring system produces data that are sensitive to student improvement (i.e., when student learning actually occurs, student performance on the monitoring tool increases on average).

NWF-CLS: Slopes on the progress-monitoring tool are significantly greater than zero; the slopes are significantly different for special education students vs. non-special education students; and the slopes are significantly greater when effective practices (e.g., high-fidelity implementation) are in place.

Grade | Group | n | Slope | SE
K | All Sample | 2,497 | 3.04 | 0.05
K | Special Ed | 166 | 2.56 | 0.18
K | Non Special Ed | 987 | 3.04 | 0.07
1 | All Sample | 9,488 | 5.03 | 0.04
1 | Special Ed | 742 | 3.93 | 0.13
1 | Non Special Ed | 3,912 | 5.28 | 0.06

Grade | Group | n | Slope | SE
K | High Fidelity* | 25,144 | 8.90 | 0.07
K | Low Fidelity* | 6,692 | 6.01 | 0.08
1 | High Fidelity* | 27,900 | 13.14 | 0.07
1 | Low Fidelity* | 3,876 | 12.93 | 0.17

Significance tests: Special Ed vs. not: Yes (K and grade 1); High vs. low fidelity: Yes (K and grade 1).

*High fidelity of implementation was defined by selecting students who had scores from all three benchmark periods and who, if they were at risk at the beginning of the year, were progress monitored more than six times.

End-of-Year Benchmarks: Convincing Evidence

Are benchmarks for minimum acceptable end-of-year performance specified in your manual or published materials?

Yes.

Specify the end-of-year performance standards:

Three end-of-year performance standards are specified: Well Below Benchmark, Below Benchmark, and At or Above Benchmark. These standards indicate increasing odds of achieving At or Above Benchmark status at the next benchmark administration.

What is the basis for specifying minimum acceptable end-of-year performance?

Criterion-Referenced                    

Specify the benchmarks:

Grade | Score Level | Likely Need for Support | Correct Letter Sounds
Kindergarten (end of year) | At or Above Benchmark | Likely to Need Core Support | 28+
Kindergarten (end of year) | Below Benchmark | Likely to Need Strategic Support | 15-27
Kindergarten (end of year) | Well Below Benchmark | Likely to Need Intensive Support | 0-14
First Grade (end of year) | At or Above Benchmark | Likely to Need Core Support | 58+
First Grade (end of year) | Below Benchmark | Likely to Need Strategic Support | 47-57
First Grade (end of year) | Well Below Benchmark | Likely to Need Intensive Support | 0-46
Second Grade (beginning of year*) | At or Above Benchmark | Likely to Need Core Support | 54+
Second Grade (beginning of year*) | Below Benchmark | Likely to Need Strategic Support | 35-53
Second Grade (beginning of year*) | Well Below Benchmark | Likely to Need Intensive Support | 0-34

*NWF CLS is assessed only through the beginning of the year in second grade, so the second-grade standards are beginning-of-year values.

The benchmark goal is the number provided in the At or Above Benchmark row. The cut point for risk is the first number provided in the Below Benchmark row.

What is the basis for specifying these benchmarks?

Criterion-Referenced

If criterion-referenced, describe procedure for specifying benchmarks for end-of-year performance levels:

The guiding vision for DIBELS is a step-by-step one. Student skills at or above benchmark at the beginning of the year put the odds in favor of the student achieving the middle-of-year benchmark goal. In turn, students with skills at or above benchmark in the middle of the year have the odds in favor of achieving the end-of-year benchmark goal. Finally, students with skills at or above benchmark at the end of the year have the odds in favor of adequate reading skills on a wide variety of external measures of reading proficiency.

Our fundamental logic for developing the benchmark goals and cut points for risk was to begin with the external outcome goal and work backward through that step-by-step system. We first obtained an external criterion measure at the end of the year, with a level of performance that would represent adequate reading skills (the GRADE Total Test Raw Score at the 40th percentile rank). Next, we specified the benchmark goal and cut point for risk on end-of-year NWF CLS with respect to the end-of-year external criterion. Then, using the NWF CLS end-of-year goal as an internal criterion, we established the benchmark goal and cut point for risk on middle-of-year NWF CLS. Finally, we established the benchmark goal and cut point for risk on beginning-of-year NWF CLS using the middle-of-year NWF CLS goal as an internal criterion. Further information on the benchmark goals and cut points for risk is available in the DIBELS Next Technical Manual.
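This backward-chaining logic lends itself to a simple numeric illustration. The sketch below, under assumed data and assumed odds thresholds, picks the lowest score at which students' observed rate of later success meets a target rate; the actual goals were derived by DMG with different data and methods (see the DIBELS Next Technical Manual).

```python
# Numeric illustration only: the data and the 0.80/0.20 odds thresholds
# are assumptions, not DMG's published procedure.
import numpy as np

def lowest_score_with_odds(scores, success, target):
    """Lowest cut such that students at/above it succeed at >= target rate."""
    for cut in np.unique(scores):
        if success[scores >= cut].mean() >= target:
            return cut
    return None

def highest_score_below_odds(scores, success, ceiling):
    """Highest cut such that students below it succeed at <= ceiling rate."""
    best = None
    for cut in np.unique(scores):
        below = scores < cut
        if below.any() and success[below].mean() <= ceiling:
            best = cut
    return best

rng = np.random.default_rng(0)
nwf_eoy = rng.poisson(40, 500)                # fake end-of-year NWF CLS scores
met_criterion = rng.random(500) < np.clip(nwf_eoy / 60, 0, 1)  # fake outcome

goal = lowest_score_with_odds(nwf_eoy, met_criterion, 0.80)
cut_point = highest_score_below_odds(nwf_eoy, met_criterion, 0.20)
# The same helpers would then be reapplied with end-of-year benchmark
# status as the internal criterion to set middle- and beginning-of-year goals.
print(goal, cut_point)
```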

Rates of Improvement Specified: Convincing Evidence

1. Is minimum acceptable growth (slope of improvement or average weekly increase in score by grade level) specified in manual or published materials? 

Yes.

a. Specify the growth standards: Using DIBELS Pathways of Progress™, the growth standards depend on the student’s beginning-of-year performance relative to students with similar initial skills; that is, a student’s performance is compared only to that of other students who had the same beginning-of-year score. Scores at or above the 80th percentile of that comparison group indicate Well Above Typical progress; scores between the 60th and 79th percentiles, Above Typical progress; scores between the 40th and 59th percentiles, Typical progress; scores between the 20th and 39th percentiles, Below Typical progress; and scores below the 20th percentile, Well Below Typical progress.

b. Basis for specifying minimum acceptable growth: 

DIBELS Pathways of Progress, DIBELS Next Survey, and best practice procedures based on widely accepted research are suggested for setting attainable, ambitious, and meaningful progress monitoring goals. The methods are described in greater detail below, and documentation is attached.

Minimum acceptable growth is specified using DIBELS Pathways of Progress, a research-based tool for (a) establishing individual student progress monitoring goals, (b) evaluating individual student progress, and (c) evaluating the effectiveness of support at the classroom, school, or district level. DIBELS Pathways of Progress are based on student progress percentiles.

By observing a student’s current skills and later benchmark goals, we are able to set meaningful goals for the student that will either achieve or increase the odds of achieving subsequent goals. Pathways of Progress emphasizes the end point of the pathway and provides a normative framework for comparison in setting goals and evaluating individual student progress. Student progress is evaluated relative to the student’s peers, that is, growth is compared to students with similar initial skills at the same grade level on the same material. Progress that is typical or above typical is considered attainable progress. Progress that is above typical or well-above typical can be considered ambitious progress.

Pathways of Progress became available in the spring of 2013 to all customers who use the DIBELSnet® (https://dibels.net) data reporting service for DIBELS Next data, and it is being implemented in partner data management systems, mCLASS and VPORT.

More information about DIBELS Pathways of Progress and illustrated examples of how the tool is used can be found in Good, Powell-Smith, and Dewey (2013).

Good, R. H., Powell-Smith, K. A., & Dewey, E. N. (2013). DIBELS Pathways of Progress: Setting Ambitious, Meaningful, and Attainable Goals in Grade Level Material. Presented at the Pacific Coast Research Conference, San Diego, CA. Available at http://dibels.org/papers/Pathways_Handouts_PCRC2013.pdf.

Other procedures for specifying adequate growth:

The procedures for Pathways of Progress and DIBELS Next Survey are described below.

DIBELS Next Pathways of Progress is available to users of DMG’s data reporting service, DIBELSnet®, and is being implemented in partner data management systems, mCLASS and VPORT.

To derive the pathways, data on approximately 163,000 students in kindergarten through sixth grade from 502 schools within 164 school districts across the United States were exported from users who entered their data into Dynamic Measurement Group’s data system, DIBELSnet. The sample was approximately 60% White, 23% Hispanic, and 7% Black, with a free/reduced lunch rate of 35%. Of this larger sample, approximately 92,000 students from kindergarten through second grade recorded scores on the NWF measure.

DIBELS Next Pathways of Progress is derived from a multi-step process:

1. For both kindergarten and first grade, students are grouped together by their beginning-of-year DIBELS Composite Score.

2. For each beginning-of-year DIBELS Composite Score, the 80th, 60th, 40th, and 20th quantiles for end-of-year NWF CLS and NWF WWR were calculated.

3. Using spline regression, we created a series of prediction expressions that modeled each end-of-year NWF CLS and NWF WWR quantile based on the beginning-of-year DIBELS Composite Score.

4. The spline prediction expressions represent the four different outcome levels (one for each quantile), and each outcome level represents the end point of a pathway of progress border.

Within these borders, we define the rates of progress as follows:

Quantile Range | Definition of Rate of Progress
Above 79% | Well Above Typical Progress
60%-79% | Above Typical Progress
40%-59% | Typical Progress
20%-39% | Below Typical Progress
Below 20% | Well Below Typical Progress
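As a rough illustration of steps 1-4, the sketch below computes the per-score quantiles empirically on simulated data and classifies a student's end-of-year score into a pathway. The real derivation fit spline quantile regressions over roughly 163,000 DIBELSnet records; the spline-smoothing step is omitted here, and all names and data are assumptions.

```python
# Rough sketch of the Pathways derivation with simulated data; the
# published borders come from spline quantile regression, not this code.
import numpy as np

rng = np.random.default_rng(1)
boy = rng.integers(0, 120, 5000)              # BOY DIBELS Composite (fake)
eoy = boy * 0.6 + rng.normal(25, 12, 5000)    # end-of-year NWF CLS (fake)

# Steps 1-2: group by BOY score, take 20/40/60/80 quantiles of EOY scores.
cuts = {}
for score in np.unique(boy):
    cuts[score] = np.quantile(eoy[boy == score], [0.20, 0.40, 0.60, 0.80])
# Steps 3-4 would fit spline regressions through these quantile points so
# the pathway borders vary smoothly with the BOY composite score.

PATHWAYS = ["Well Below Typical Progress", "Below Typical Progress",
            "Typical Progress", "Above Typical Progress",
            "Well Above Typical Progress"]

def pathway(boy_score, eoy_score):
    """Classify progress against peers with the same beginning-of-year score."""
    borders = cuts[boy_score]
    return PATHWAYS[int(np.searchsorted(borders, eoy_score, side="right"))]

print(pathway(60, 75))
```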

For those who do not use DIBELSnet, progress monitoring goals can still be set. A reasonable goal is the next time point’s benchmark goal, considering the amount of time the student will likely need; the logistics of who will teach, using what instructional materials, when, and where; and a measurement plan to evaluate progress.

Teachers and administrators can set progress monitoring goals for out-of-grade progress monitoring using DIBELS Next Survey (Powell-Smith, Kaminski, & Good, 2011).

1. Determine a student’s current level of performance using DIBELS Next Survey.

2. Determine the goal based on the progress monitoring level and the end-of-year benchmark goals for that level.

3. Set the goal date so that the goal is achieved in half the time in which it would typically be achieved (e.g., for a first-grade student whose beginning-of-year level of performance is closer to kindergarten, move the end-of-year kindergarten benchmark goals to be achieved by the middle-of-year first-grade benchmark time).

4. Draw an aimline connecting the current performance to the goal, as sketched below.
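For step 4, the aimline is simply a straight line from the current score to the goal. A minimal sketch, with illustrative numbers that are not from DIBELS Next Survey itself:

```python
# Minimal aimline sketch, assuming weekly progress monitoring.
def aimline(current_score, goal_score, weeks_to_goal):
    """Expected score at each week, on a straight line to the goal."""
    slope = (goal_score - current_score) / weeks_to_goal
    return [current_score + slope * week for week in range(weeks_to_goal + 1)]

# E.g., a student at 12 CLS aiming for the 28 CLS end-of-kindergarten goal
# in half the usual time (say, 15 weeks instead of 30):
print(aimline(12, 28, 15))
```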

For those who do not use DIBELS Next Survey, out-of-grade-level progress monitoring goals still need to be examined individually. In this case, adequate progress is defined as moving at least two recommendation levels (e.g., intensive to benchmark) within that instructional grade level, or one grade level (e.g., strategic at the second-grade level to strategic at the third-grade level) (Kaminski, Cummings, Powell-Smith, & Good, 2008).

Kaminski, R. A., Cummings, K., Powell-Smith, K. A., & Good, R. H. (2008). Best Practices in Using Dynamic Indicators of Basic Early Literacy Skills for Formative Assessment and Evaluation. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology V (Vol. 4, pp. 1181-1204). Bethesda, MD: NASP Publications.

Powell-Smith, K. A., Kaminski, R. A., & Good, R. H. (2011). DIBELS Survey Beta (Technical Report 8). Eugene, OR: Dynamic Measurement Group. Available at https://dibels.org/papers/SurveyBetaTechReport.pdf.

Decision Rules for Changing Instruction: Unconvincing Evidence

Specification of validated decision rules for when changes to instruction need to be made: 

We recommend using a goal-oriented rule for evaluating a student’s response to intervention that is straightforward for teachers to understand and use. Decisions about a student’s progress are based on comparisons of DIBELS scores that are plotted on a graph and the aimline, or expected rate of progress. We suggest that educational professionals consider instructional modifications when student performance falls below the aimline for three consecutive points (Kaminski, Cummings, Powell-Smith, and Good, 2008).
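A minimal sketch of this three-consecutive-points rule follows; the function name and the example scores are hypothetical, used only to show the comparison against the aimline.

```python
# Illustrative check of the "three consecutive points below the aimline"
# decision rule; data and names are hypothetical.
def needs_instructional_change(scores, aimline, run=3):
    """True if the last `run` scores all fall below their aimline values."""
    if len(scores) < run:
        return False                      # not enough data points yet
    return all(s < a for s, a in zip(scores[-run:], aimline[-run:]))

# Example: expected (aimline) vs. observed NWF CLS over six weeks.
expected = [12, 14, 16, 18, 20, 22]
observed = [13, 14, 15, 16, 17, 19]
print(needs_instructional_change(observed, expected))  # True
```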

Evidentiary basis for these decision rules: 

This recommended decision rule is based on early work with CBM (Fuchs, 1988, 1989) and precision teaching (White & Haring, 1980) and allows for a minimum of three data points to be gathered before any decision is made. As when validating a student’s need for support, a pattern of performance is considered before making individual student decisions (Kaminski, Cummings, Powell-Smith, & Good, 2008).

Kaminski, R. A., Cummings, K., Powell-Smith, K. A., & Good, R. H. (2008). Best Practices in Using Dynamic Indicators of Basic Early Literacy Skills for Formative Assessment and Evaluation. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology V (Vol. 4, pp. 1181-1204). Bethesda, MD: NASP Publications.

Fuchs, L. S. (1988). Effects of computer-managed instruction on teachers' implementation of systematic monitoring programs and student achievement. Journal of Educational Research, 81, 294-304.

Fuchs, L. S. (1989). Evaluating solutions: Monitoring progress and revising intervention plans. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children. New York: Guilford Press.

White, O. R., & Haring, N. G. (1980). Exceptional teaching (2nd ed.). Columbus, OH: Merrill.

Decision Rules for Increasing Goals: Unconvincing Evidence

Specification of validated decision rules for when increases in goals need to be made:

In general, it is recommended that support be continued until a student achieves at least three points at or above the goal. If a decision is made to discontinue support, it is recommended that progress monitoring be continued weekly for at least 1 month to ensure that the student is able to maintain growth without the supplemental support. The frequency of progress monitoring will be faded gradually as the child’s progress continues to be sufficient (Kaminski, Cummings, Powell-Smith, and Good, 2008).
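A companion sketch of this discontinuation rule, again with hypothetical names and values:

```python
# Illustrative check of the "at least three points at or above the goal"
# rule for fading support; data and names are hypothetical.
def support_can_fade(scores, goal, needed=3):
    """True once at least `needed` scores sit at or above the goal."""
    return sum(s >= goal for s in scores) >= needed

print(support_can_fade([50, 55, 58, 61, 59], goal=57))  # 58, 61, 59 -> True
```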

Evidentiary basis for these decision rules:

This recommended decision rule is based on early work with CBM (Fuchs, 1988, 1989) and precision teaching (White & Haring, 1980) and allows for a minimum of three data points to be gathered before any decision is made. As when validating a student’s need for support, a pattern of performance is considered before making individual student decisions (Kaminski, Cummings, Powell-Smith, and Good, 2008).

Kaminski, R. A., Cummings, K., Powell-Smith, K. A., & Good, R. H. (2008). Best Practices in Using Dynamic Indicators of Basic Early Literacy Skills for Formative Assessment and Evaluation. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology V (Vol. 4, pp. 1181-1204). Bethesda, MD: NASP Publications.

Fuchs, L. S. (1988). Effects of computer-managed instruction on teachers' implementation of systematic monitoring programs and student achievement. Journal of Educational Research, 81, 294-304.

Fuchs, L. S. (1989). Evaluating solutions: Monitoring progress and revising intervention plans. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children. New York: Guilford Press.

White, O. R., & Haring, N. G. (1980). Exceptional teaching (2nd ed.). Columbus, OH: Merrill.

Improved Student Achievement: Data Unavailable

Improved Teacher Planning: Data Unavailable