DIBELS Next

Area: First Sound Fluency

This profile covers: Cost; Technology, Human Resources, and Accommodations for Special Needs; Service and Support; Purpose and Other Implementation Information; and Usage and Reporting.

Amplify: The basic pricing plan is an annual per-student license of $14.90. For users of an existing mCLASS assessment product, adding mCLASS:DIBELS Next costs $6 per student.

Sopris: There are three purchasing options for implementing Progress Monitoring materials in Year 1:

1) Progress Monitoring via Online Test Administration and Scoring

2) Progress Monitoring materials as part of the purchase of Classroom Sets, which also include Benchmark materials and DIBELS Next Survey

3) Individual Progress Monitoring materials.

DIBELS Next Classroom Sets contain everything needed for one person to conduct the Benchmark Assessment for 25 students and the Progress Monitoring Assessment for up to five students. These easy-to-implement kits simplify the distribution and organization of DIBELS Next materials.

DMG: Materials may be downloaded at no cost from DMG at http://dibels.org/next; the only costs are the minimal reproduction costs associated with printing.

Testers require 4-8 hours of training. Examiners must be, at a minimum, paraprofessionals.

Training manuals and materials are field tested and are included in the cost of the tool.

Amplify’s Customer Care Center offers complete user-level support from 7:00 a.m. to 7:00 p.m. EST, Monday through Friday. Customers may contact a customer support representative via telephone, e-mail, or electronically through the mCLASS website. Additionally, customers have self-service access to instructions, documents, and frequently asked questions on the website. The research staff and product teams are available to answer questions about the content of the assessments.

Accommodations:

DIBELS Next is an assessment instrument well suited to capturing the developing reading skills of students in special education who are learning to read, with a few exceptions: (a) students who are deaf; (b) students who have fluency-based speech disabilities, e.g., stuttering or oral apraxia; (c) students who are learning to read in a language other than English or Spanish; and (d) students with severe disabilities. DIBELS Next is appropriate for all other students, including those in special education for whom reading connected text is an IEP goal. For students receiving special education, it may be necessary to adjust goals and timelines. Approved accommodations are listed in the administration manual.

Where to obtain:

Amplify Education, Inc.
55 Washington Street, Suite 900
Brooklyn, NY 11201
1-800-823-1969, option 1
www.amplify.com

Sopris Learning
17855 Dallas Parkway, Suite 400, Dallas, TX 75287-6816
http://www.soprislearning.com

DMG
859 Willamette Street, Suite 320, Eugene, OR 97401
541-431-6931
(888) 399-1995
http://dibels.org

DIBELS Next measures are brief, powerful indicators of foundational early literacy skills that: are quick to administer and score; serve as universal screening (or benchmark assessment) and progress monitoring; identify students in need of intervention support; evaluate the effectiveness of interventions; and support the RtI/Multi-tiered model. DIBELS Next comprises six measures: First Sound Fluency (FSF), Letter Naming Fluency (LNF), Phoneme Segmentation Fluency (PSF), Nonsense Word Fluency (NWF), DIBELS Oral Reading Fluency (DORF), and Daze. 

FSF is a brief, direct measure of a student’s fluency in identifying the initial sounds in words.

Administration of the test takes 1-5 minutes per student and must be conducted in an individual setting. DMG estimates an additional 1-2 minutes for scoring.

There are 20 alternate forms per measure.

Raw scores and developmental benchmarks are available. The assessor says a series of words, one at a time, and asks the student to say the first sound in each word.

Score: the number of points earned for correct responses in 1 minute. The student earns 2 points for each correct initial phoneme and 1 point for each correct initial consonant blend, consonant plus vowel, or consonant blend plus vowel. Raw scores, cut points, and benchmark goals are all grade-specific but are not strictly based on grade norms.
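To make the scoring rule concrete, here is a minimal sketch in Python. The category labels and function name are ours, invented for illustration; actual scoring follows DMG's administration and scoring rules.

    # Hypothetical sketch of the FSF scoring rule described above.
    # Category labels and names are illustrative, not DMG's.
    POINTS = {
        "initial_phoneme": 2,       # e.g., /s/ for "sat"
        "initial_blend": 1,         # e.g., /st/ for "stop"
        "consonant_plus_vowel": 1,  # e.g., /sa/ for "sat"
        "blend_plus_vowel": 1,      # e.g., /sto/ for "stop"
    }

    def fsf_score(correct_responses):
        """Total points for the correct responses given in 1 minute."""
        return sum(POINTS[r] for r in correct_responses)

    # A student giving 10 correct initial phonemes and 4 correct blends
    # in one minute scores 10*2 + 4*1 = 24.
    print(fsf_score(["initial_phoneme"] * 10 + ["initial_blend"] * 4))  # 24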

 

Reliability of the Performance Level Score: Convincing Evidence

Type of Reliability: Three-Form Alternate-Form
Age or Grade: K
n (range): 97
Coefficient (median): 0.94
SEM: 3.15
Information / Subjects: Participants were students from five schools in one school district.

Three-form reliability estimates are provided to correspond to the recommended DIBELS practice of examining a pattern of performance on repeated assessments for increased confidence in decisions. The reliability of three-form aggregates is estimated using the Spearman-Brown Prophecy Formula.
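The Spearman-Brown Prophecy Formula estimates the reliability of an aggregate of k parallel forms from a single-form coefficient: r_k = k*r / (1 + (k-1)*r). A minimal sketch follows; the single-form value of 0.84 is an assumed figure chosen only to show how a three-form aggregate of about 0.94 could arise, not a reported statistic.

    # Spearman-Brown prophecy formula: reliability of an aggregate of
    # k parallel forms, given single-form reliability r.
    def spearman_brown(r, k=3):
        return k * r / (1 + (k - 1) * r)

    # Assumed single-form reliability of 0.84 (illustration only):
    print(round(spearman_brown(0.84, k=3), 2))  # 0.94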

 

Reliability of the Slope: Convincing Evidence

Type of Reliability: HLM
Age or Grade: K
n (range): 8,682
Coefficient (median): 0.72
Information / Subjects: Reliability of the slope was computed using data from the 2011-2012 school year. The sample was 9% African American and 13% Hispanic; 21% subsidized lunch; 6% special education; 7% English as a second language. Weekly assessments over 12 months (6-29 assessments; mean = 8.74).

 

Validity of the Performance Level Score: Convincing Evidence

Type of Validity: Predictive
Age or Grade: K
Test or Criterion: GRADE Total Test
N (range): 166
Coefficient (median): 0.52
Information / Subjects: Participants were a stratified random sample drawn from thirteen schools across five states based on beginning-of-year DIBELS scores.

 

Predictive Validity of the Slope of Improvement: Data Unavailable

Disaggregated Reliability and Validity Data: Unconvincing Evidence

Disaggregated Reliability of the Slope

Type of Reliability: HLM, disaggregated by race/ethnicity
Age or Grade: K

HLM (Caucasian): n = 3,306; coefficient (median) = 0.72
HLM (African American): n = 788; coefficient (median) = 0.74
HLM (Hispanic): n = 1,134; coefficient (median) = 0.75

Information / Subjects: Reliability of the slope was computed using data from the 2011-2012 school year. 21% subsidized lunch; 6% special education; 7% English as a second language. Weekly assessments over 12 months (6-29 assessments; mean = 8.74).

 

 

Alternate Forms: Partially Convincing Evidence

1. Evidence that alternate forms are of equal and controlled difficulty or, if IRT based, evidence of item or ability invariance:

Items for all FSF forms were selected from a word pool consisting of single-syllable words. Initial work on this word pool was derived from a study of preschool measures of early literacy (Kaminski, Baker, Chard, Clarke, & Smith, 2006). Words were excluded if they were deemed inappropriate (e.g., rob, knife) or if they began with the initial phonemes /b/, /d/, /p/, or /g/ followed by the /u/ sound (e.g., duck), as such words cannot be scored differentially due to confusion with the schwa sound. The final word pool consisted of 861 words, 3 of which were used as example items and so do not appear as test items. The words were then divided into three difficulty categories.

Difficulty Category: Number and Percent of Items per Form / Total Items in Word Pool

Initial continuous sound (e.g., /s/, /m/) followed by a vowel sound: 7 items per form (23%) / 234 words in pool
Initial stop sound (e.g., /b/, /t/) followed by a vowel sound: 8 items per form (27%) / 265 words in pool
Initial blend (e.g., /st/): 15 items per form (50%) / 362 words in pool

Each form consists of 30 items. Before creating the individual forms, a stratified sequence of the different difficulty categories was developed. The first 28 items in the sequence were divided into 7 groups of 4; each group of 4 included one word with an initial continuous sound, one word with an initial stop sound, and two words with an initial blend. Within each group of 4, the order of the categories was randomized, except for the first group, which was fixed as an initial continuous sound, then an initial stop sound, then two words with blends. The 29th item in the sequence was a word with an initial stop sound, and the 30th was a word beginning with a blend. Once the sequence was determined, the same stratification was applied to all forms, so that the same difficulty categories appear in the same locations on every form. Each word on a form was then randomly selected from the words that matched the specified difficulty category.
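A sketch of this construction procedure under the assumptions just described. The word pool here is a tiny stand-in (the real 861-word pool is DMG's), the helper names are ours, and duplicate draws across slots are not handled, so this illustrates the stratification logic only.

    import random

    # Stand-in word pool keyed by difficulty category (illustrative).
    WORD_POOL = {
        "continuous": ["sun", "man", "fish"],  # continuous sound + vowel
        "stop": ["top", "cat", "bed"],         # stop sound + vowel
        "blend": ["stop", "flag", "crab"],     # initial blend
    }

    def make_sequence(rng):
        """Build the stratified 30-slot category sequence described above."""
        seq = ["continuous", "stop", "blend", "blend"]  # group 1: fixed order
        for _ in range(6):                              # groups 2-7: randomized
            group = ["continuous", "stop", "blend", "blend"]
            rng.shuffle(group)
            seq.extend(group)
        seq.append("stop")   # slot 29
        seq.append("blend")  # slot 30
        return seq

    def make_form(sequence, rng):
        """Draw one matching word per slot; the sequence is shared by all forms."""
        return [rng.choice(WORD_POOL[cat]) for cat in sequence]

    rng = random.Random(0)
    sequence = make_sequence(rng)   # fixed once, reused for every form
    form = make_form(sequence, rng)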

During the 2006-2007 academic year, data on six alternate forms of FSF were collected from approximately 1,000 kindergarten students in 17 participating schools. Students were given FSF forms at the regular benchmark intervals, beginning of year (September/October) and middle of year (January/February), plus 3 progress monitoring forms during each of the three months in between. Of the participating students, 50% were female, 90% White, 8% Latino/a, and 1% African American, with a 45% free/reduced-price lunch rate.

Type of Reliability: One-month alternate-form
Grade: K
N: 355-994
Coefficient (range): 0.71-0.82
Coefficient (median): 0.78

Cummings, K. D., Kaminski, R. A., Good, R. H., & O’Neil, M. E. (2011). Assessing phonemic awareness in preschool and kindergarten: Development and initial validation of First Sound Fluency. Assessment for Effective Intervention, 36(2), 94-106. 

2. Number of alternate forms of equal and controlled difficulty:

20 alternate forms.

Sensitive to Student Improvement: Convincing Evidence

1. Describe evidence that the monitoring system produces data that are sensitive to student improvement (i.e., when student learning actually occurs, student performance on the monitoring tool increases, on average).

Slopes on the progress-monitoring tool are significantly greater than zero; the slopes are significantly different for special-education vs. non-special-education students. 

Grade: K
All Sample: n = 8,286; slope = 4.66; SE = 0.02
Special Ed: n = 478; slope = 3.80; SE = 0.11
Non Special Ed: n = 3,452; slope = 4.73; SE = 0.04
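The slopes above come from a hierarchical linear model (HLM) fit across thousands of students. As a much simpler illustration of what a growth slope is, the sketch below fits one student's invented scores by ordinary least squares; it is not the analysis DMG ran.

    # Least-squares slope for one student's scores over time.
    # Data are invented; the reported slopes come from an HLM analysis.
    def slope(times, scores):
        n = len(times)
        mt, ms = sum(times) / n, sum(scores) / n
        num = sum((t - mt) * (s - ms) for t, s in zip(times, scores))
        den = sum((t - mt) ** 2 for t in times)
        return num / den

    occasions = [0, 1, 2, 3, 4]   # assessment occasions
    fsf = [5, 9, 15, 19, 24]      # invented FSF scores
    print(slope(occasions, fsf))  # 4.8 points per occasion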

 

End-of-Year Benchmarks: Convincing Evidence

1. Are benchmarks for minimum acceptable end-of-year performance specified in your manual or published materials?

Yes.

a. Specify the end-of-year performance standards:

Three end-of-year performance standards are specified: Well Below Benchmark, Below Benchmark, and At or Above Benchmark. These standards are used to indicate increasing odds of achieving At or Above Benchmark at the next benchmark administration period.

b. Basis for specifying minimum acceptable end-of-year performance:

Criterion-referenced.

The DIBELS Next benchmark goals provide targeted levels of skill that students need to achieve by specific times in order to be considered to be making adequate progress. In developing the benchmark goals, the focus was on adequate reading skills in general, not on a particular state assessment, published reading test, or national assessment. A student with adequate reading skills should read adequately regardless of the specific assessment that is used. In the 2007 National Assessment of Educational Progress (NAEP), 34% of students were judged to be below the Basic level in reading, and 68% were judged to be below Proficient. According to the NAEP, “Basic denotes partial mastery of prerequisite knowledge and skills that are fundamental for proficient work at a given grade” (Daane et al., 2005, p. 18). Thus, students who score at the 40th percentile or above on a high-quality, nationally norm-referenced test are likely to be rated Basic or above on the NAEP and can be considered to have adequate reading skills.

DIBELS Next benchmark goals are empirically derived, criterion-referenced target scores that represent adequate reading progress. The cut-points for risk indicate a level of skill below which the student is unlikely to achieve a subsequent reading goal without receiving additional, targeted instructional support.

Daane, M.C., Campbell, J.R., Grigg, W.S., Goodman, M.J., & Oranje, A. (2005). Fourth-Grade Students Reading Aloud: NAEP 2002 Special Study of Oral Reading (NCES 2006–469). U.S. Department of Education. Institute of Education Sciences, National Center for Education Statistics. Washington, DC: Government Printing Office. Available http://nces.ed.gov/nationsreportcard/pdf/studies/2006469.pdf. Accessed 6/22/2010.

c. Specify the benchmarks:

Grade: Kindergarten

At or Above Benchmark (Likely to Need Core Support): Beginning of Year 10+; Middle of Year 30+; End of Year NA
Below Benchmark (Likely to Need Strategic Support): Beginning of Year 5-9; Middle of Year 20-29; End of Year NA
Well Below Benchmark (Likely to Need Intensive Support): Beginning of Year 0-4; Middle of Year 0-19; End of Year NA

The benchmark goal is the number provided in the At or Above Benchmark row. The cut point for risk is the first number provided in the Below Benchmark row.
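A sketch of how these cut points classify a kindergarten FSF score. The thresholds come from the table above; the function and dictionary names are ours.

    # Kindergarten FSF benchmark classification using the cut points
    # in the table above; FSF has no end-of-year benchmark (NA).
    CUTS = {
        "beginning": (5, 10),  # (cut point for risk, benchmark goal)
        "middle": (20, 30),
    }

    def classify(score, period):
        risk_cut, goal = CUTS[period]
        if score >= goal:
            return "At or Above Benchmark"
        if score >= risk_cut:
            return "Below Benchmark"
        return "Well Below Benchmark"

    print(classify(7, "beginning"))  # Below Benchmark
    print(classify(19, "middle"))    # Well Below Benchmark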

 

d. Basis for specifying these benchmarks?

Criterion-referenced.

In our benchmark goal study, we used the 40th percentile or above on the Group Reading Assessment and Diagnostic Evaluation (GRADE, Williams, 2001) as an indicator that the student was making adequate progress in acquisition of important early reading and/or reading skills.

For more information about the DIBELS Next Benchmark Goals, see Chapter 4 of the DIBELS Next Technical Manual.

Good, R. H., Kaminski, R. A., Dewey, E., Walin, J., Powell-Smith, K. A., & Latimer, R. (2013). DIBELS Next Technical Manual. Eugene, OR: Dynamic Measurement Group, Inc.

Williams, K. T. (2001). Group Reading Assessment and Diagnostic Evaluation (GRADE). New York: Pearson.

Procedure for specifying benchmarks for end-of-year performance levels: The guiding vision for DIBELS is a step-by-step one. Student skills at or above benchmark at the beginning of the year put the odds in favor of the student achieving the middle-of-year benchmark goal. In turn, students with skills at or above benchmark in the middle of the year have the odds in favor of achieving the end-of-year benchmark goal. Finally, students with skills at or above benchmark at the end of the year have the odds in favor of adequate reading skills on a wide variety of external measures of reading proficiency. Our fundamental logic for developing the benchmark goals and cut points for risk was to begin with the external outcome goal and work backward through that step-by-step system. We first obtained an external criterion measure (the GRADE Total Test Raw Score) at the end of the year, with a level of performance that represents adequate reading skills (the 40th percentile). Next, we specified the benchmark goal and cut point for risk on the end-of-year DIBELS Composite Score with respect to the end-of-year external criterion. Then, using the DIBELS Composite end-of-year goal as an internal criterion, we established the benchmark goals and cut points for risk on middle-of-year FSF. Finally, we established the benchmark goals and cut points for risk on beginning-of-year FSF, using middle-of-year FSF as an internal criterion.
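The backward-chaining step can be sketched as choosing, on each earlier measure, the cut that best separates students who did and did not meet the next goal. This is a deliberately simplified stand-in (simple classification accuracy on invented data); the actual derivation is described in the DIBELS Next Technical Manual.

    # Simplified stand-in for one backward step: find the predictor cut
    # that best separates students who met the later criterion from
    # those who did not. Data are invented; DMG's procedure is richer.
    def best_cut(scores, met_goal):
        n = len(scores)
        best_val, best_acc = None, -1.0
        for cut in sorted(set(scores)):
            correct = sum((s >= cut) == m for s, m in zip(scores, met_goal))
            if correct / n > best_acc:
                best_val, best_acc = cut, correct / n
        return best_val

    moy_fsf = [4, 12, 25, 31, 8, 40, 18, 29]   # invented MOY FSF scores
    met_eoy = [False, False, True, True, False, True, False, True]
    print(best_cut(moy_fsf, met_eoy))  # 25 for these invented data
    # Chaining: external criterion -> EOY composite goal -> MOY FSF goal
    # -> BOY FSF goal, each previous goal serving as the next criterion.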

Rates of Improvement Specified: Convincing Evidence

1. Is minimum acceptable growth (slope of improvement or average weekly increase in score by grade level) specified in manual or published materials?

Yes.

a. Specify the growth standards: Using DIBELS Pathways of Progress™, the growth standards depend on the student’s beginning-of-year performance relative to students with similar levels of initial skills; that is, a student’s progress is compared only to that of other students who have the same beginning-of-year score. Scores above the 80th percentile are considered Well Above Typical progress. Scores between the 60th and 79th percentiles are considered Above Typical progress. Scores between the 40th and 59th percentiles are considered Typical progress. Scores between the 20th and 39th percentiles are considered Below Typical progress. Scores below the 20th percentile are considered Well Below Typical progress.

 

b. Basis for specifying minimum acceptable growth: 

DIBELS Pathways of Progress, DIBELS Next Survey®, and best practice procedures based on widely accepted research are suggested for setting attainable, ambitious, and meaningful progress monitoring goals. 

Minimum acceptable growth is specified using DIBELS Pathways of Progress, a research-based tool for (a) establishing individual student progress monitoring goals, (b) evaluating individual student progress, and (c) evaluating the effectiveness of support at the classroom, school, or district level. DIBELS Pathways of Progress is based on student progress percentiles.

By considering a student’s current skills and later benchmark goals, meaningful goals can be set that either meet or increase the odds of achieving subsequent benchmark goals. Pathways of Progress emphasizes the end point of the pathway and provides a normative framework for setting goals and evaluating individual student progress. Student progress is evaluated relative to the student’s peers; that is, growth is compared to that of students with similar initial skills, at the same grade level, on the same material. Progress that is typical or above typical is considered attainable progress. Progress that is above typical or well above typical can be considered ambitious progress.

Pathways of Progress became available in the spring of 2013 to all customers who use the DIBELSnet® (https://dibels.net) data reporting service for DIBELS Next data, and it is being implemented in the partner data management systems mCLASS and VPORT.

More information about DIBELS Pathways of Progress and illustrated examples of how the tool is used can be found in Good, Powell-Smith, and Dewey (2013).

Good, R. H., Powell-Smith, K. A., Dewey, E. N. (2013). DIBELS Pathways of Progress: Setting Ambitious, Meaningful, and Attainable Goals in Grade Level Material. Presented at the Pacific Coast Research Conference. San Diego: CA. Available at http://dibels.org/papers/Pathways_Handouts_PCRC2013.pdf.

 

c. Procedure for specifying criterion for adequate growth:

DIBELS Next Pathways of Progress is available to users of DMG’s data reporting service, DIBELSnet®, and is being implemented in partner data management systems, mCLASS and VPORT.

To derive the pathways, data on approximately 163,000 students in kindergarten through sixth grade, from 502 schools within 164 school districts across the United States, were exported from Dynamic Measurement Group’s data system, DIBELSnet. The sample was approximately 60% White, 23% Hispanic, and 7% Black, with a free/reduced-price lunch rate of 35%. Of this larger sample, approximately 30,000 kindergarten students recorded scores on the FSF measure.

DIBELS Next Pathways of Progress is derived from a multi-step process.

1.     For Kindergarten, students are grouped together by their beginning-of-year DIBELS Composite Score.

2.     For each beginning-of-year DIBELS Composite Score, the 80th, 60th, 40th, and 20th quantiles for middle-of-year FSF were calculated.

3.     Using spline regression, we created a series of prediction expressions that modeled each middle-of-year FSF quantile based on the beginning-of-year DIBELS Composite Score.

4.     The spline prediction expressions represent the four different outcome levels (one for each quantile), and each outcome level defines the border between adjacent pathways of progress.

Within these borders, we define the rates of progress as follows:

Above 79%: Well-Above Typical Progress
60%-79%: Above Typical Progress
40%-59%: Typical Progress
20%-39%: Below Typical Progress
Below 20%: Well-Below Typical Progress

For those who do not use DIBELSnet, progress monitoring goals can still be set. A reasonable goal is the next middle-of-year FSF benchmark goal, taking into account the amount of time the student is likely to need; the logistics of who will teach, with what instructional materials, when, and where; and a measurement plan for evaluating progress.

Teachers and administrators can set progress monitoring goals for out-of-grade progress monitoring using DIBELS Next Survey® (Powell-Smith, Kaminski, & Good, 2011).

1.     Determine a student’s current level of performance using DIBELS Next Survey.

2.     Determine the goal based on the progress monitoring level and the end-of-year benchmark goals for that level.

3.     Set the goal date so that the goal is achieved in half the time in which it would typically be achieved (e.g., for a first-grade student whose beginning-of-year performance on measures such as NWF CLS or PSF is closer to the kindergarten level, back-test with the FSF measure and use the kindergarten middle-of-year benchmark goal, to be achieved by the middle-of-year first-grade benchmark time).

4.     Draw an aimline connecting the current performance to the goal.
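A minimal sketch of step 4: the aimline is the straight line from current performance to the goal, so the expected score at each week is a linear interpolation. Names and values are invented.

    # Aimline: straight line from current performance to the goal.
    def aimline(start_score, goal_score, n_weeks):
        step = (goal_score - start_score) / n_weeks
        return [start_score + step * w for w in range(n_weeks + 1)]

    # Invented example: current score 8, goal 30, 12 weeks out.
    expected = aimline(8, 30, 12)   # expected[w] = target score at week w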

For those who do not use DIBELS Next Survey, out-of-grade level progress monitoring goals still need to be examined individually. In this case, adequate progress is defined as moving at least two recommendation levels (e.g., intensive to benchmark) within the time frame (Kaminski, Cummings, Powell-Smith, and Good, 2008).

Kaminski, R. A., Cummings, K. D., Powell-Smith, K. A., & Good, R. H. (2008). Best practices in using Dynamic Indicators of Basic Early Literacy Skills for formative assessment and evaluation. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology V (Vol. 4, pp. 1181-1204). Bethesda, MD: NASP Publications.

Powell-Smith, K. A., Kaminski, R. A., & Good, R. H. (2011). DIBELS Survey Beta (Technical Report 8). Eugene, OR: Dynamic Measurement Group. Available at https://dibels.org/papers/SurveyBetaTechReport.pdf

Decision Rules for Changing Instruction: Unconvincing Evidence

Specification of validated decision rules for when changes to instruction need to be made: We recommend using a goal-oriented rule for evaluating a student’s response to intervention that is straightforward for teachers to understand and use. Decisions about a student’s progress are based on comparing the student’s DIBELS scores, plotted on a graph, to the aimline, or expected rate of progress. We suggest that educational professionals consider instructional modifications when student performance falls below the aimline for three consecutive points (Kaminski, Cummings, Powell-Smith, and Good, 2008).
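A sketch of the three-consecutive-points rule just described; the function name is ours, and the scores and aimline values are invented.

    # Flag a possible instructional change when the three most recent
    # scores all fall below the corresponding aimline values.
    def consider_instructional_change(scores, aimline_values):
        if len(scores) < 3:
            return False  # gather at least three data points first
        return all(s < a for s, a in zip(scores[-3:], aimline_values[-3:]))

    print(consider_instructional_change([6, 7, 8], [9.5, 11.0, 12.5]))  # True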

Evidentiary basis for these decision rules: This recommended decision rule is based on early work with CBM (Fuchs, 1988, 1989) and precision teaching (White & Haring, 1980) and allows for a minimum of three data points to be gathered before any decision is made. As when validating a student’s need for support, a pattern of performance is considered before making individual student decisions (Kaminski, Cummings, Powell-Smith, and Good, 2008).

Kaminski, R. A., Cummings, K. D., Powell-Smith, K. A., & Good, R. H. (2008). Best practices in using Dynamic Indicators of Basic Early Literacy Skills for formative assessment and evaluation. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology V (Vol. 4, pp. 1181-1204). Bethesda, MD: NASP Publications.

Fuchs, L. S. (1988). Effects of computer-managed instruction on teachers' implementation of systematic monitoring programs and student achievement. Journal of Educational Research, 81, 294-304.

Fuchs, L. S. (1989). Evaluating solutions: Monitoring progress and revising intervention plans. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children. New York: Guilford Press.

White, O. R., & Haring, N. G. (1980). Exceptional teaching (2nd ed.). Columbus, OH: Merrill.

 

Decision Rules for Increasing Goals: Unconvincing Evidence

Specification of validated decision rules for when increases in goals need to be made: In general, it is recommended that support be continued until a student achieves at least three points at or above the goal. If a decision is made to discontinue support, it is recommended that progress monitoring continue weekly for at least 1 month to ensure that the student is able to maintain growth without the supplemental support. The frequency of progress monitoring can be faded gradually as the child’s progress continues to be sufficient (Kaminski, Cummings, Powell-Smith, and Good, 2008).
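The companion rule can be sketched the same way: continue support until at least three points fall at or above the goal. Names and values are invented.

    # Support continues until at least three points are at or above goal.
    def ready_to_fade_support(scores, goal, points_needed=3):
        return sum(s >= goal for s in scores) >= points_needed

    print(ready_to_fade_support([28, 31, 30, 33], goal=30))  # True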

Evidentiary basis for these decision rules: This recommended decision rule is based on early work with CBM (Fuchs, 1988, 1989) and precision teaching (White & Haring, 1980) and allows for a minimum of three data points to be gathered before any decision is made. As when validating a student’s need for support, a pattern of performance is considered before making individual student decisions (Kaminski, Cummings, Powell-Smith, and Good, 2008).

Kaminski, R. A., Cummings, K. D., Powell-Smith, K. A., & Good, R. H. (2008). Best practices in using Dynamic Indicators of Basic Early Literacy Skills for formative assessment and evaluation. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology V (Vol. 4, pp. 1181-1204). Bethesda, MD: NASP Publications.

Fuchs, L. S. (1988). Effects of computer-managed instruction on teachers' implementation of systematic monitoring programs and student achievement. Journal of Educational Research, 81, 294-304.

Fuchs, L. S. (1989). Evaluating solutions: Monitoring progress and revising intervention plans. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children. New York: Guilford Press.

White, O. R., & Haring, N. G. (1980). Exceptional teaching (2nd ed.). Columbus, OH: Merrill.

Improved Student Achievement: Data Unavailable

Improved Teacher Planning: Data Unavailable