FAST CBMReading

Reading

Cost

Technology, Human Resources, and Accommodations for Special Needs

Service and Support

Purpose and Other Implementation Information

Usage and Reporting

The Formative Assessment System for Teachers (FAST) is a cloud-based suite of assessment and reporting tools that includes CBMReading English. As of 2013-14, there is a $5 per student per year charge for the system. As a cloud-based assessment suite, there are no hardware costs or fees for additional materials.

Computer and internet access are required for full use.

Testers will require less than 1 hour of training.

Paraprofessionals can administer the test.

FastBridge Learning
520 Nicollet Mall
Suite 910
Minneapolis, MN 55402-1057
Phone: 612-254-2534

Field-tested training manuals are included and provide all implementation information.

Access to interactive online self-guided teacher training is included at no additional cost. In-person training is available at an additional cost of $300 per hour.

CBMReading English is used to monitor student progress in reading achievement in grades 1-6. The automated output of each assessment gives information on the accuracy and fluency of passage reading, which can be used to determine instructional level and inform intervention.

To administer the measure, an examiner listens to the child read a set of short passages aloud. Each passage is read for one minute while the examiner uses the software to mark omissions, insertions, substitutions, hesitations, and mispronunciations as errors. The number of words read correctly per minute (WRCM) is then scored using the online application. 

Administration takes approximately 1-5 minutes per student, depending on the number of passages administered. Additional scoring time required is less than 1 minute.

Forms correspond to student ability level rather than grade. All forms are divided into Levels A, B, and C, which correspond to 1st-grade, 2nd- and 3rd-grade, and 4th- to 6th-grade reading levels, respectively. There are 39 Level A passages, 60 Level B passages, and 60 Level C passages.

Raw scores are calculated by counting how many words were read in one minute and subtracting the number of errors from that total; the result is the number of words read correctly per minute. When bundles are administered, the median of the three passage scores is used for further analysis. All of this is done automatically by FAST, which also produces vertically (between passage sets) and horizontally (within passage sets) equated scores for individual passages and bundles.
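A minimal sketch of the scoring logic just described, using hypothetical passage records (FAST performs this computation automatically; the function and variable names here are illustrative only):

```python
from statistics import median

def wrcm(words_read: int, errors: int) -> int:
    """Words read correctly per minute: words read in one minute minus errors."""
    return words_read - errors

def bundle_score(passages):
    """Median WRCM across a three-passage bundle, the value used for analysis."""
    return median(wrcm(words, errs) for words, errs in passages)

# Example: one student's three-passage bundle (hypothetical data).
print(bundle_score([(62, 4), (58, 2), (71, 6)]))  # -> 58
```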

 

Reliability of the Performance Level Score

Rating (Grades 1-6): Full bubble

 

| Type of Reliability | Level (Grades) | n | Forms; Interval | Coefficient Range | Median | SEM | Sample |
|---|---|---|---|---|---|---|---|
| Alternate-form correlation, individual passages (average Fisher z-transformed inter-passage correlations) | Level A (Grade 1) | Gr 1 = 206; Gr 2 = 21; Gr 3 = 4; total = 231 | 39 passages; < 2 weeks | 0.62-0.86 | 0.74 | 5.40 | MN, NY, & GA |
| Alternate-form correlation, individual passages | Level B (Grades 2-3) | Gr 1 = 138; Gr 2 = 179; Gr 3 = 126; Gr 4 = 32; Gr 5 = 13; total = 488 | 60 passages; < 2 weeks | 0.65-0.82 | 0.75 | 8.54 | MN, NY, & GA |
| Alternate-form correlation, individual passages | Level C (Grades 4-6) | Gr 1 = 3; Gr 2 = 135; Gr 3 = 79; Gr 4 = 156; Gr 5 = 140; total = 513 | 60 passages; < 2 weeks | 0.78-0.88 | 0.83 | 10.41 | MN, NY, & GA |
| Alternate-form correlation, passage bundles (average Fisher z-transformed inter-passage correlations) | Level A (Grade 1) | Gr 1 = 206; Gr 2 = 21; Gr 3 = 4; total = 231 | 39 passages; < 2 weeks | 0.89-0.94 | 0.92 | 3.03 | MN, NY, & GA |
| Alternate-form correlation, passage bundles | Level B (Grades 2-3) | Gr 1 = 138; Gr 2 = 179; Gr 3 = 126; Gr 4 = 32; Gr 5 = 13; total = 488 | 60 passages; < 2 weeks | 0.87-0.92 | 0.90 | 4.97 | MN, NY, & GA |
| Alternate-form correlation, passage bundles | Level C (Grades 4-6) | Gr 1 = 3; Gr 2 = 135; Gr 3 = 79; Gr 4 = 156; Gr 5 = 140; total = 513 | 60 passages; < 2 weeks | 0.92-0.95 | 0.94 | 7.06 | MN, NY, & GA |
| Item-total correlation | Level A (Grade 1) | Gr 1 = 206; Gr 2 = 21; Gr 3 = 4; total = 231 | 60 passages; < 2 weeks | 0.91-0.92 | 0.92 | - | MN, NY, & GA |
| Item-total correlation | Level B (Grades 2-3) | Gr 1 = 138; Gr 2 = 179; Gr 3 = 126; Gr 4 = 32; Gr 5 = 13; total = 488 | 60 passages; < 2 weeks | 0.89-0.91 | 0.90 | - | MN, NY, & GA |
| Item-total correlation | Level C (Grades 4-6) | Gr 1 = 3; Gr 2 = 135; Gr 3 = 79; Gr 4 = 156; Gr 5 = 140; total = 513 | 60 passages; < 2 weeks | 0.88-0.93 | 0.91 | - | MN, NY, & GA |
| Test-retest (fall to winter) | Grade 1 | 386 | 3 & 3 passages; ~12 weeks | - | 0.90 | - | MN & GA (NY similar) |
| Test-retest (fall to winter) | Grade 2 | 369 | 3 & 3 passages; ~12 weeks | - | 0.92 | - | MN & GA (NY similar) |
| Test-retest (fall to winter) | Grade 3 | 394 | 3 & 3 passages; ~12 weeks | - | 0.92 | - | MN & GA (NY similar) |
| Test-retest (fall to winter) | Grade 4 | 396 | 3 & 3 passages; ~12 weeks | - | 0.89 | - | MN & GA (NY similar) |
| Test-retest (fall to winter) | Grade 5 | 395 | 3 & 3 passages; ~12 weeks | - | 0.94 | - | MN & GA (NY similar) |
| Test-retest (fall to winter) | Grade 6 | 220 | 3 & 3 passages; ~12 weeks | - | 0.94 | - | MN & GA (NY similar) |
| Test-retest (fall to spring) | Grade 1 | 384 | 3 & 3 passages; ~32 weeks | - | 0.82 | - | MN & GA (NY similar) |
| Test-retest (fall to spring) | Grade 2 | 367 | 3 & 3 passages; ~32 weeks | - | 0.89 | - | MN & GA (NY similar) |
| Test-retest (fall to spring) | Grade 3 | 383 | 3 & 3 passages; ~32 weeks | - | 0.90 | - | MN & GA (NY similar) |
| Test-retest (fall to spring) | Grade 4 | 388 | 3 & 3 passages; ~32 weeks | - | 0.93 | - | MN & GA (NY similar) |
| Test-retest (fall to spring) | Grade 5 | 388 | 3 & 3 passages; ~32 weeks | - | 0.93 | - | MN & GA (NY similar) |
| Test-retest (fall to spring) | Grade 6 | 436 | 3 & 3 passages; ~32 weeks | - | 0.92 | - | MN & GA (NY similar) |
| Inter-rater | Level A | 146 students | NA | 0.83-1.00 | 0.97 | - | MN, NY, & GA |
| Inter-rater | Level B | 1,391 students | NA | 0.93-0.97 | 0.97 | - | MN, NY, & GA |
| Inter-rater | Level C | 1,345 students | NA | 0.83-1.00 | 0.98 | - | MN, NY, & GA |

Note. "MN, NY, & GA" indicates data gathered from all three states/sites; "MN & GA (NY similar)" indicates data gathered from two states, with the addition of NY data yielding highly similar results.

 

Reliability of the Slope

Rating (Grades 1-6): Full bubble

| Type of Reliability | Grade | n | Coefficient (median) | SEM | Information / Subjects |
|---|---|---|---|---|---|
| Split-half | 1 | 968 | 0.70 | 0.30 | ~15 progress monitoring observations per student, 2014-15 school year |
| Split-half | 2 | 2,245 | 0.77 | 0.22 | ~19 observations per student, 2014-15 |
| Split-half | 3 | 1,996 | 0.64 | 0.27 | ~19 observations per student, 2014-15 |
| Split-half | 4 | 1,654 | 0.04 | 0.28 | ~18 observations per student, 2014-15 |
| Split-half | 5 | 1,023 | 0.63 | 0.30 | ~18 observations per student, 2014-15 |
| Split-half | 6 | 496 | 0.44 | 0.28 | ~19 observations per student, 2014-15 |
| Reliability of the slope | 1 | 75 | 0.79 | - | up to 20 observations over several months, 2014-15 |
| Reliability of the slope | 2 | 87 | 0.83 | - | up to 20 observations over several months, 2014-15 |
| Reliability of the slope | 3 | 85 | 0.70 | - | up to 20 observations over several months, 2014-15 |
| Reliability of the slope | 4 | 93 | 0.72 | - | M = 12 observations (max 33) over M = 4.54 months (max 10.17), 2014-15 |
| Reliability of the slope | 5 | 74 | 0.63 | - | up to 20 observations over several months, 2014-15 |
| Reliability of the slope | 6 | 74 | 0.63 | - | M = 11.42 observations (max 34) over M = 4.31 months (max 10.23), 2014-15 |
| Reliability of the slope | 1 | 48 | 0.95 | - | up to 30 observations over several months, 2014-15 |
| Reliability of the slope | 2 | 58 | 0.79 | - | up to 30 observations over several months, 2014-15 |
| Reliability of the slope | 3 | 64 | 0.62 | - | up to 30 observations over several months, 2014-15 |
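The split-half entries above imply that each student's progress monitoring series was divided into two halves, a slope was fit to each half, and the resulting slopes were correlated. A minimal sketch under the assumption of an odd/even split with a Spearman-Brown correction (the exact split used is not described here):

```python
import numpy as np

def split_half_slope_reliability(weeks, scores):
    """Split-half reliability of the slope across many students.

    weeks, scores: per-student 1-D arrays of equal length. An OLS slope is
    fit to the odd- and even-indexed observations of each student, the two
    slope sets are correlated, and the Spearman-Brown correction adjusts
    for the halved series length.
    """
    odd_slopes, even_slopes = [], []
    for x, y in zip(weeks, scores):
        x, y = np.asarray(x, float), np.asarray(y, float)
        odd_slopes.append(np.polyfit(x[::2], y[::2], 1)[0])
        even_slopes.append(np.polyfit(x[1::2], y[1::2], 1)[0])
    r = np.corrcoef(odd_slopes, even_slopes)[0, 1]
    return 2 * r / (1 + r)  # Spearman-Brown corrected
```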

 

Validity of the Performance Level Score

Rating (Grades 1-6): Full bubble

| Type of Validity | Grade | Test or Criterion | n | Interval | Coefficient (median) | Sample |
|---|---|---|---|---|---|---|
| Concurrent | 1 | TOSREC | 171 | - | 0.89 | MN sample (GA & NY similar) |
| Concurrent | 2 | TOSREC | 206 | - | 0.83 | MN sample (GA & NY similar) |
| Concurrent | 3 | TOSREC | 188 | - | 0.76 | MN sample (GA & NY similar) |
| Concurrent | 4 | TOSREC | 181 | - | 0.74 | MN sample (GA & NY similar) |
| Concurrent | 5 | TOSREC | 202 | - | 0.76 | MN sample (GA & NY similar) |
| Concurrent | 6 | TOSREC | 205 | - | 0.80 | MN sample (GA & NY similar) |
| Predictive | 1 | AIMSweb | 208 | ~12 weeks | 0.91 | MN sample (GA & NY similar) |
| Predictive | 2 | AIMSweb | 230 | ~12 weeks | 0.92 | MN sample (GA & NY similar) |
| Predictive | 3 | AIMSweb | 220 | ~12 weeks | 0.90 | MN sample (GA & NY similar) |
| Predictive | 4 | AIMSweb | 242 | ~12 weeks | 0.92 | MN sample (GA & NY similar) |
| Predictive | 5 | AIMSweb | 223 | ~12 weeks | 0.92 | MN sample (GA & NY similar) |
| Predictive | 6 | AIMSweb | 220 | ~12 weeks | 0.94 | MN sample (GA & NY similar) |
| Concurrent | 1 | AIMSweb | 215 | - | 0.97 | MN sample (GA & NY similar) |
| Concurrent | 2 | AIMSweb | 245 | - | 0.97 | MN sample (GA & NY similar) |
| Concurrent | 3 | AIMSweb | 245 | - | 0.95 | MN sample (GA & NY similar) |
| Concurrent | 4 | AIMSweb | 247 | - | 0.97 | MN sample (GA & NY similar) |
| Concurrent | 5 | AIMSweb | 224 | - | 0.96 | MN sample (GA & NY similar) |
| Concurrent | 6 | AIMSweb | 220 | - | 0.95 | MN sample (GA & NY similar) |
| Predictive | 1 | DIBELS Next | 201 | ~28 weeks | 0.80 | MN sample (GA & NY similar) |
| Concurrent | 1 | DIBELS Next | 72 | - | 0.78 | calculated from MN, GA, & NY |
| Concurrent | 2 | DIBELS Next | 437 | - | 0.94 | calculated from MN, GA, & NY |
| Concurrent | 3 | DIBELS Next | 452 | - | 0.96 | calculated from MN, GA, & NY |
| Concurrent | 4 | DIBELS Next | 484 | - | 0.95 | calculated from MN, GA, & NY |
| Concurrent | 5 | DIBELS Next | 486 | - | 0.96 | calculated from MN, GA, & NY |
| Concurrent | 6 | DIBELS Next | 211 | - | 0.95 | calculated from MN, GA, & NY |

Note. "MN sample (GA & NY similar)" indicates results based on the MN sample, with highly similar results found when the GA and NY datasets were included.

 

Predictive Validity of the Slope of Improvement

Rating (Grades 1-6): dash

Bias Analysis Conducted

Rating (Grades 1-6): No

Disaggregated Reliability and Validity Data

Rating (Grades 1-6): Yes

Disaggregated Reliability for the Performance Level Score

The following disaggregated test-retest reliability coefficients were derived from a sample of first- (N = 6,208), second- (N = 7,697), third- (N = 7,064), fourth- (N = 6,085), fifth- (N = 5,687), and sixth-grade (N = 1,292) students (total N = 34,034). The sample was approximately 48.1% female and 51.9% male. Approximately 78% of students were not eligible for special education services, 8.7% were eligible, and 13.1% did not report special education status. The sample was approximately 61.3% White, 10.3% African American, 6.4% Hispanic, 5.6% Asian, 1.3% American Indian or Alaska Native, 3.4% Multiracial, 0.2% Native Hawaiian or Other Pacific Islander, and 11.4% reported as "other."

Type of reliability: delayed test-retest. Cells show the median coefficient with n in parentheses; "-" indicates no data reported for that cell. SEMs were not reported.

| Subgroup | Grade | Fall-Winter | Winter-Spring | Fall-Spring | 2-3 Week Delay |
|---|---|---|---|---|---|
| American Indian or Alaska Native | 2 | - | 0.95 (68) | - | 0.97 (7) |
| American Indian or Alaska Native | 3 | 0.91 (53) | 0.95 (54) | 0.92 (47) | - |
| American Indian or Alaska Native | 4 | 0.94 (44) | 0.96 (41) | 0.92 (41) | - |
| American Indian or Alaska Native | 5 | 0.93 (45) | 0.94 (55) | 0.89 (47) | - |
| American Indian or Alaska Native | 6 | 0.96 (10) | 0.96 (9) | 0.96 (8) | - |
| Asian | 1 | 0.90 (69) | 0.94 (292) | 0.92 (69) | 0.92 (112) |
| Asian | 2 | 0.95 (297) | 0.96 (282) | 0.91 (278) | 0.92 (73) |
| Asian | 3 | 0.93 (310) | 0.94 (290) | 0.90 (280) | 0.92 (75) |
| Asian | 4 | 0.93 (272) | 0.94 (232) | 0.92 (237) | 0.91 (82) |
| Asian | 5 | 0.95 (254) | 0.95 (235) | 0.93 (240) | 0.95 (20) |
| Asian | 6 | 0.97 (129) | 0.96 (117) | 0.95 (114) | 0.97 (29) |
| African American | 1 | 0.87 (203) | 0.92 (674) | 0.92 (206) | 0.93 (353) |
| African American | 2 | 0.92 (526) | 0.94 (501) | 0.86 (473) | 0.92 (220) |
| African American | 3 | 0.90 (511) | 0.94 (473) | 0.87 (459) | 0.90 (188) |
| African American | 4 | 0.92 (507) | 0.94 (470) | 0.90 (454) | 0.89 (192) |
| African American | 5 | 0.93 (468) | 0.92 (448) | 0.88 (432) | 0.89 (54) |
| African American | 6 | 0.91 (90) | 0.90 (74) | 0.87 (74) | 0.85 (13) |
| Hispanic | 1 | 0.87 (156) | 0.88 (444) | 0.94 (156) | 0.89 (257) |
| Hispanic | 2 | 0.92 (329) | 0.93 (325) | 0.85 (305) | 0.91 (154) |
| Hispanic | 3 | 0.92 (357) | 0.93 (352) | 0.89 (337) | 0.91 (141) |
| Hispanic | 4 | 0.93 (323) | 0.94 (304) | 0.91 (291) | 0.92 (126) |
| Hispanic | 5 | 0.95 (253) | 0.94 (251) | 0.92 (234) | 0.91 (23) |
| Hispanic | 6 | 0.95 (73) | 0.90 (62) | 0.87 (61) | 0.98 (7) |
| Multiracial | 1 | 0.87 (79) | 0.91 (249) | 0.95 (79) | 0.93 (145) |
| Multiracial | 2 | 0.91 (196) | 0.94 (191) | 0.85 (192) | 0.91 (82) |
| Multiracial | 3 | 0.91 (194) | 0.91 (178) | 0.89 (181) | 0.90 (91) |
| Multiracial | 4 | 0.93 (173) | 0.91 (150) | 0.88 (152) | 0.91 (75) |
| Multiracial | 5 | 0.94 (106) | 0.92 (92) | 0.89 (93) | 0.87 (10) |
| Multiracial | 6 | 0.96 (16) | 0.95 (15) | 0.93 (15) | - |
| Native Hawaiian or Other Pacific Islander | 1 | - | 0.95 (16) | - | - |
| Native Hawaiian or Other Pacific Islander | 2 | 0.99 (11) | 0.97 (13) | - | 0.95 (12) |
| Native Hawaiian or Other Pacific Islander | 3 | - | - | 0.90 (8) | - |
| White | 1 | 0.77 (849) | 0.89 (3,555) | 0.83 (857) | 0.90 (684) |
| White | 2 | 0.91 (2,911) | 0.94 (2,865) | 0.87 (3,081) | 0.89 (396) |
| White | 3 | 0.90 (3,565) | 0.92 (3,431) | 0.88 (3,387) | 0.88 (389) |
| White | 4 | 0.91 (2,874) | 0.92 (2,602) | 0.89 (2,711) | 0.90 (358) |
| White | 5 | 0.92 (2,326) | 0.92 (2,145) | 0.89 (2,346) | 0.92 (45) |
| White | 6 | 0.91 (444) | 0.92 (368) | 0.86 (357) | 0.87 (26) |

Disaggregated Reliability for the Slope

| Type of Reliability | Grades | n (range) | Coefficient Range | Median | Subjects |
|---|---|---|---|---|---|
| Multi-level (true slope variance / total slope variance)** | 2-5 | 1,308-1,518 | 0.25-0.43 | 0.28 | Caucasian students; urban, suburban, and rural MN |
| Multi-level (true slope variance / total slope variance)** | 2-5 | 353-442 | 0.32-0.60 | 0.43 | African American students; urban, suburban, and rural MN |
| Multi-level (true slope variance / total slope variance)** | 2-5 | 197-210 | 0.38-0.52 | 0.40 | Asian students; urban, suburban, and rural MN |
| Multi-level (true slope variance / total slope variance)** | 2-5 | 247-314 | 0.21-0.52 | 0.45 | Latino/a students; urban, suburban, and rural MN |

**Reliability of the slope for multi-level analyses may be biased when few observations are used to estimate the slope. In this instance, slopes were estimated from tri-annual assessments (only 3 observations), so interpret with caution (Raudenbush & Bryk, 2002). Follow-up studies that collect more observations (i.e., > 10 observations) are currently underway.
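For context, the true-slope-variance ratio can be estimated with a random-slopes multilevel model in the Raudenbush and Bryk (2002) formulation. A sketch assuming long-format data with hypothetical column names ('student', 'week', 'wrc'); this is not the authors' code:

```python
import pandas as pd
import statsmodels.formula.api as smf

def multilevel_slope_reliability(data: pd.DataFrame) -> float:
    """Reliability of the slope as true slope variance / total slope variance."""
    fit = smf.mixedlm("wrc ~ week", data, groups=data["student"],
                      re_formula="~week").fit()
    tau11 = fit.cov_re.loc["week", "week"]  # between-student (true) slope variance
    # Average sampling variance of each student's OLS slope:
    # residual variance / sum of squared centered measurement occasions.
    err = data.groupby("student")["week"].apply(
        lambda w: fit.scale / ((w - w.mean()) ** 2).sum()).mean()
    return tau11 / (tau11 + err)
```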

 

Alternate Forms

Rating (Grades 1-6): Full bubble

1. Evidence that alternate forms are of equal and controlled difficulty or, if IRT based, evidence of item or ability invariance:

Participants and Setting

Student participants were from urban and suburban schools located in the Southeast (40%), Upper Midwest (40%), and Northeast (20%) regions of the United States. The sample consisted of 1,267 students who were Asian (3%), Black (15%), Hispanic (19%), American Indian (1%), and White (62%), distributed across the grade range.

Materials and Forms

Passages were developed with detailed specification to control for difficulty and consistency. Three levels of difficulty were specified. Level 1 constrained text to over-sample the first 100 sight words, highly decodable words (CVC, CVCe), and regular spelling patterns. Level 2 and Level 3 passages contained increased text complexity and word difficulty. All passages were developed with a goal-action-outcome structure.

Procedures for Field Testing

All data were collected by trained researchers in a school-based setting. Each participant read all forms within approximately 10 school days. All forms were administered to a stratified sample of students across three geographic regions. The order of administration for the forms was counterbalanced within and across days across participants.

ANOVA Tables for Passage Mean Equivalence

Tests of statistical significance indicate that there are no differences between alternate forms at any grade level.
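The tables report a repeated-measures decomposition in which the form x student interaction serves as the error term; the η² values are consistent with partial eta-squared, SS(form) / (SS(form) + SS(error)). A generic sketch of that computation for a students-by-forms score matrix (not the authors' code):

```python
import numpy as np

def form_equivalence_anova(scores: np.ndarray):
    """F and partial eta-squared for form effects in a (students x forms) matrix."""
    n, k = scores.shape                       # students, forms
    grand = scores.mean()
    ss_form = n * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_student = k * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_error = ss_total - ss_form - ss_student  # form x student interaction
    ms_form = ss_form / (k - 1)
    ms_error = ss_error / ((k - 1) * (n - 1))
    return ms_form / ms_error, ss_form / (ss_form + ss_error)  # F, partial eta-squared
```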

 

Level 1 – Grade 1

| Effect | SS | df | MS | F | η² |
|---|---|---|---|---|---|
| Between-form | 14,726 | 19 | 775.07 | 23.73 | 0.094 |
| Within-form | 697,590 | 4,600 | 151.65 | | |
| Interaction (error) | 142,739 | 4,370 | 32.66 | | |
| Total | 712,316 | 4,619 | | | |

Level 2 – Grade 2

| Effect | SS | df | MS | F | η² |
|---|---|---|---|---|---|
| Between-form | 10,447 | 19 | 549.80 | 7.57 | 0.015 |
| Within-form | 2,851,491 | 9,740 | 2292.80 | | |
| Interaction (error) | 671,874 | 9,253 | 72.60 | | |
| Total | 2,861,938 | 9,759 | | | |

Level 2 – Grade 3

| Effect | SS | df | MS | F | η² |
|---|---|---|---|---|---|
| Between-form | 34,652 | 19 | 1823.80 | 24.48 | 0.048 |
| Within-form | 2,901,743 | 9,740 | 297.90 | | |
| Interaction (error) | 689,412 | 9,253 | 74.50 | | |
| Total | 2,936,395 | 9,759 | | | |

Level 3 – Grade 4

| Effect | SS | df | MS | F | η² |
|---|---|---|---|---|---|
| Between-form | 36,169 | 19 | 2127.60 | 19.18 | 0.036 |
| Within-form | 5,857,058 | 9,216 | 635.50 | | |
| Interaction (error) | 965,384 | 8,704 | 110.90 | | |
| Total | 5,893,227 | 9,233 | | | |

Level 3 – Grade 5

| Effect | SS | df | MS | F | η² |
|---|---|---|---|---|---|
| Between-form | 24,467 | 19 | 1439.20 | 13.81 | 0.026 |
| Within-form | 5,824,431 | 9,216 | 632.00 | | |
| Interaction (error) | 907,141 | 8,704 | 104.20 | | |
| Total | 5,848,898 | 9,233 | | | |

Level 3 – Grade 6

| Effect | SS | df | MS | F | η² |
|---|---|---|---|---|---|
| Between-form | 20,058 | 19 | 1432.70 | 12.56 | 0.024 |
| Within-form | 5,208,325 | 9,216 | 678.20 | | |
| Interaction (error) | 817,872 | 8,704 | 114.10 | | |
| Total | 5,228,382 | 9,233 | | | |

* p < 0.05, ** p < 0.01, *** p < 0.001 (applies to all tables above)

 

Alternate Form Descriptive Statistics

Level 1 – Grade 1

Alternate-form correlations across the 20 forms: Mean = 0.95, SD = 0.01, Min = 0.94, Max = 0.97.

Descriptive statistics by alternate form:

| Form | M | SD |
|---|---|---|
| L1, Form 2 | 24.16 | 12.02 |
| L1, Form 3 | 25.91 | 12.10 |
| L1, Form 6 | 24.90 | 12.90 |
| L1, Form 8 | 28.35 | 11.70 |
| L1, Form 10 | 26.24 | 10.93 |
| L1, Form 11 | 24.74 | 10.51 |
| L1, Form 12 | 26.37 | 13.96 |
| L1, Form 13 | 26.71 | 11.02 |
| L1, Form 18 | 23.78 | 13.10 |
| L1, Form 20 | 24.77 | 12.28 |
| L1, Form 25 | 28.51 | 12.81 |
| L1, Form 27 | 26.14 | 11.83 |
| L1, Form 29 | 25.19 | 13.73 |
| L1, Form 43 | 26.13 | 13.56 |
| L1, Form 44 | 29.86 | 12.86 |
| L1, Form 45 | 23.29 | 12.09 |
| L1, Form 46 | 26.31 | 11.41 |
| L1, Form 56 | 23.57 | 14.30 |
| L1, Form 58 | 27.01 | 11.95 |
| L1, Form 59 | 23.14 | 9.50 |

Note. Each of 231 students took all forms within a 10-day period.

Level 2 – Grade 2

Alternate-form correlations across the 20 forms: Mean = 0.90, SD = 0.01, Min = 0.88, Max = 0.91.

Descriptive statistics by alternate form:

| Form | M | SD |
|---|---|---|
| L2, Form 2 | 52.61 | 17.68 |
| L2, Form 15 | 52.82 | 18.28 |
| L2, Form 23 | 54.20 | 17.76 |
| L2, Form 25 | 53.02 | 18.38 |
| L2, Form 38 | 52.73 | 16.20 |
| L2, Form 44 | 55.31 | 14.99 |
| L2, Form 46 | 54.50 | 18.69 |
| L2, Form 49 | 55.04 | 17.43 |
| L2, Form 53 | 55.80 | 15.79 |
| L2, Form 54 | 52.09 | 17.93 |
| L2, Form 66 | 53.10 | 17.54 |
| L2, Form 68 | 54.73 | 16.69 |
| L2, Form 80 | 54.35 | 16.79 |
| L2, Form 84 | 54.30 | 15.06 |
| L2, Form 94 | 52.62 | 17.21 |
| L2, Form 95 | 55.26 | 16.26 |
| L2, Form 98 | 54.36 | 18.21 |
| L2, Form 103 | 54.60 | 18.32 |
| L2, Form 109 | 53.91 | 15.89 |
| L2, Form 116 | 53.86 | 16.40 |

Note. Each of 488 students took all forms within a 10-day period.

Level 2 – Grade 3

Alternate-form correlations across the 20 forms: Mean = 0.89, SD = 0.01, Min = 0.87, Max = 0.92.

Descriptive statistics by alternate form:

| Form | M | SD |
|---|---|---|
| L2, Form 1 | 50.43 | 16.89 |
| L2, Form 13 | 52.05 | 18.76 |
| L2, Form 16 | 49.73 | 14.27 |
| L2, Form 22 | 49.82 | 18.37 |
| L2, Form 37 | 47.48 | 17.96 |
| L2, Form 50 | 50.27 | 17.42 |
| L2, Form 57 | 52.09 | 18.13 |
| L2, Form 58 | 46.87 | 18.65 |
| L2, Form 61 | 49.05 | 15.91 |
| L2, Form 70 | 48.14 | 17.62 |
| L2, Form 71 | 46.61 | 15.70 |
| L2, Form 76 | 47.03 | 18.38 |
| L2, Form 78 | 49.01 | 16.86 |
| L2, Form 90 | 51.62 | 18.50 |
| L2, Form 99 | 48.73 | 17.44 |
| L2, Form 104 | 46.80 | 15.63 |
| L2, Form 106 | 51.94 | 15.34 |
| L2, Form 108 | 51.47 | 17.95 |
| L2, Form 113 | 51.93 | 18.60 |
| L2, Form 14 | 48.93 | 15.85 |

Note. Each of 488 students took all forms within a 10-day period.

Level 3 – Grade 4

Alternate-form correlations across the 20 forms: Mean = 0.87, SD = 0.01, Min = 0.85, Max = 0.89.

Descriptive statistics by alternate form:

| Form | M | SD |
|---|---|---|
| L3, Form 17 | 107.48 | 25.07 |
| L3, Form 19 | 105.48 | 25.71 |
| L3, Form 21 | 110.91 | 26.21 |
| L3, Form 22 | 109.50 | 24.56 |
| L3, Form 31 | 108.75 | 26.64 |
| L3, Form 37 | 107.76 | 23.96 |
| L3, Form 43 | 105.70 | 24.30 |
| L3, Form 47 | 107.36 | 24.72 |
| L3, Form 58 | 105.99 | 24.22 |
| L3, Form 62 | 107.72 | 23.81 |
| L3, Form 75 | 105.64 | 26.06 |
| L3, Form 78 | 112.32 | 25.82 |
| L3, Form 84 | 108.17 | 24.31 |
| L3, Form 105 | 106.32 | 24.35 |
| L3, Form 109 | 111.45 | 26.82 |
| L3, Form 115 | 109.56 | 23.63 |
| L3, Form 116 | 107.65 | 27.74 |
| L3, Form 117 | 106.90 | 25.38 |
| L3, Form 39 | 108.00 | 23.00 |
| L3, Form 83 | 107.60 | 26.00 |

Note. Each of 513 students took all forms within a 10-day period.

Level 3 – Grade 5

Alternate-form correlations across the 20 forms: Mean = 0.88, SD = 0.02, Min = 0.84, Max = 0.90.

Descriptive statistics by alternate form:

| Form | M | SD |
|---|---|---|
| L3, Form 13 | 103.73 | 24.38 |
| L3, Form 14 | 103.30 | 24.50 |
| L3, Form 16 | 101.04 | 24.74 |
| L3, Form 35 | 105.32 | 23.47 |
| L3, Form 42 | 103.36 | 26.27 |
| L3, Form 51 | 100.68 | 24.39 |
| L3, Form 53 | 100.89 | 26.31 |
| L3, Form 70 | 104.26 | 25.04 |
| L3, Form 73 | 105.01 | 24.06 |
| L3, Form 82 | 104.56 | 24.90 |
| L3, Form 85 | 102.92 | 26.42 |
| L3, Form 86 | 103.91 | 24.93 |
| L3, Form 87 | 101.96 | 26.67 |
| L3, Form 89 | 104.40 | 26.91 |
| L3, Form 94 | 100.27 | 27.40 |
| L3, Form 96 | 101.19 | 23.22 |
| L3, Form 112 | 103.47 | 26.73 |
| L3, Form 27 | 103.50 | 26.00 |
| L3, Form 81 | 102.00 | 26.00 |
| L3, Form 105 | 100.65 | 21.35 |

Note. Each of 513 students took all forms within a 10-day period.

Level 3 – Grade 6

Alternate-form correlations across the 20 forms: Mean = 0.87, SD = 0.01, Min = 0.85, Max = 0.88.

Descriptive statistics by alternate form:

| Form | M | SD |
|---|---|---|
| L3, Form 18 | 99.07 | 26.22 |
| L3, Form 30 | 98.81 | 26.23 |
| L3, Form 32 | 94.96 | 26.12 |
| L3, Form 45 | 95.39 | 25.80 |
| L3, Form 48 | 95.95 | 26.65 |
| L3, Form 49 | 97.89 | 25.26 |
| L3, Form 52 | 97.56 | 26.30 |
| L3, Form 68 | 99.08 | 25.23 |
| L3, Form 71 | 95.99 | 25.42 |
| L3, Form 72 | 98.63 | 25.35 |
| L3, Form 74 | 95.84 | 26.68 |
| L3, Form 79 | 96.51 | 27.97 |
| L3, Form 97 | 93.54 | 23.99 |
| L3, Form 100 | 96.62 | 27.13 |
| L3, Form 101 | 95.71 | 26.04 |
| L3, Form 44 | 97.00 | 26.00 |
| L3, Form 110 | 96.00 | 26.00 |
| L3, Form 7 | 97.80 | 26.00 |
| L3, Form 63 | 96.00 | 25.00 |
| L3, Form 12 | 98.00 | 26.00 |

Note. Each of 513 students took all forms within a 10-day period.

Rates of Improvement Specified

Rating (Grades 1-6): Full bubble

Is minimum acceptable growth (slope of improvement or average weekly increase in score by grade level) specified in manual or published materials?

Yes.

Table of Growth Standards

Table 1. Minimal Acceptable Rates of Improvement by Starting Level and Grade Level

| Grade | Norms: Fall-to-Spring Screening ROIs (a) | Norms: Weekly Monitoring ROIs (b) | Recommendations: Monitoring Levels (c) | Recommendations: Minimal/Ambitious ROIs (d) | Recommendations: End-of-Year Levels |
|---|---|---|---|---|---|
| 1 | 1.91 | 1.47 | < 18 | 1.5 / 2.0 | > 70 |
| 2 | 1.36 | 1.22 | 40 to 59 | 1.5 / 2.0 | > 105 |
| 3 | 1.13 | 0.98 | 60 to 91 | 1.5 / 2.0 | > 130 |
| 4 | 1.01 | 0.90 | 92 to 133 | 1.0 / 1.5 | > 150 |
| 5 | 0.89 | 0.91 | 133 to 141 | 0.90 / 1.0 | > 161 |
| 6 | 0.96 | 0.89 | > 141 | 0.90 / 1.0 | > 171 |

Note. Normative and criterion standards were used to set the recommendations for ROI.

(a) Norms for Fall-to-Spring Screening ROI were derived from large, nationally representative samples (range, 6,485 to 44,102) of student performances during the fall and spring screening periods. The norms depict average growth in the typically developing population, which here includes cases with a fall score between the 30th and 85th percentiles.

(b) Norms for Weekly ROI were derived from modest, nationally representative samples of ROIs (range, 640 to 2,982) with 10 to 30 weeks of data from weekly monitoring. They are presented alongside the Fall-to-Spring norms to illustrate that normative growth in the population is often greater than normative growth among those who receive intervention and weekly monitoring, which is not an acceptable outcome.

(c) Recommended Monitoring Levels and End-of-Year Levels correspond with research-based benchmark estimates. Students within the Monitoring Levels are at risk for reading deficits and are likely to perform below the 40th percentile on nationally normed assessments and below proficiency on national standards. Those students should be considered for intervention and monitoring to accelerate their ROI and meet the End-of-Year benchmarks.

(d) Recommended Minimal/Ambitious ROIs are benchmark levels for the minimal acceptable ROIs by grade level and Recommended Monitoring Level. The minimal acceptable standards are presented along with recommendations for more ambitious ROIs (Deno et al., 2001; Fuchs et al., 1993). These recommendations are useful for setting ROI expectations when evidence-based interventions are implemented with adequate intensity (4 days a week for 20 minutes each day). The goal ROI should be sufficiently ambitious to approximate the End-of-Year Level, but not so ambitious as to make the goal unattainable.
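As a worked example of applying Table 1 to goal setting (a hypothetical grade 3 student):

```python
# Hypothetical grade 3 student reading 65 WRCM in fall (within the 60-91
# monitoring level), with an 18-week intervention period planned.
baseline = 65
minimal_roi, ambitious_roi = 1.5, 2.0  # grade 3 recommended ROIs from Table 1
weeks = 18

print(baseline + minimal_roi * weeks)    # 92.0  -> minimal goal
print(baseline + ambitious_roi * weeks)  # 101.0 -> ambitious goal
```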

Rates of Improvement: Weekly Data Collection - Intervention

The evidence in this section summarizes the sample, procedures, and results for the analysis of extant data (i.e., progress monitoring data sampled from the FAST™ database) from students who received supplemental, intensive, or special education services. All data were collected by practitioners as part of routine practice. Evidence for the reliability of the slope estimates was previously submitted and reviewed in response to GOM 2.

Participants and Setting

Data were sampled to approximate a nationally representative sample by geographic region. All students in the sample were identified to receive supplemental, intensive, or special education services for reading.

Procedures

All cases had data collected approximately weekly for at least 10 to 30 weeks with nine or more alternate forms. The sample was trimmed to exclude unlikely and extreme ROIs, such that the range of ROIs in the sample was approximately 0.0 to 5.0 words gained per week.

Linear and nonlinear models were fit to the data. Consistent with the results of peer-reviewed published research (Fuchs et al., 1993), there was sufficient fit for the more parsimonious linear models across grades. 

Results

The estimated rates of growth are presented in Table 2.

Table 2. Normative Rates of Improvement: Long-Term Monitoring (10 to 30 weeks)

| Grade | N | Mean | SD | Skew | Kurtosis |
|---|---|---|---|---|---|
| 1 | 1,209 | 1.47 | 0.72 | 0.68 | 1.01 |
| 2 | 2,982 | 1.22 | 0.61 | 0.69 | 1.08 |
| 3 | 2,400 | 0.98 | 0.57 | 1.12 | 2.62 |
| 4 | 2,066 | 0.90 | 0.55 | 1.21 | 2.87 |
| 5 | 1,310 | 0.91 | 0.56 | 1.30 | 4.37 |
| 6 | 640 | 0.89 | 0.59 | 1.41 | 3.27 |

Rates of Improvement: Tri-Annual Screening

The evidence in this section summarizes the sample, procedures, and results for normative rates of improvement based on tri-annual screening. Although these analyses are not required by NCII, they provide additional context for the interpretation and use of the ROI norms and recommendations.

Participants and Setting

Data were sampled from the FAST database. Demographic data were not available at the time of the analysis. Sample size by season and grade ranged from 6,485 to 44,102 (Table 3; see table notes). The students in the "Typical" sample were not monitored weekly, and there was no indication that any appreciable number received weekly monitoring or intervention. Data from within the system indicated that an appreciable number of students below the 30th percentile were monitored, either monthly or weekly.

Procedures

Three CBMreading passages were administered in each of the fall, winter, and spring seasons. Rates of improvement that are estimated with tri-annual screening data sometimes vary across season (Christ et al., 2010). To provide a thorough set of norms to users, rates of improvement were estimated for fall to winter, winter to spring, and fall to spring. Rates of improvement were also estimated within three strata: Low Achieving (< 30th percentile), Typical (30th to 84th percentiles), and High Achieving (> 84th percentile).
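A minimal sketch of the seasonal ROI computation, assuming the median of the three seasonal passage scores serves as the seasonal score and an approximate week count between screening windows (the week counts here are assumptions):

```python
from statistics import median

def seasonal_roi(window1, window2, weeks_between):
    """Weekly rate of improvement between two screening windows.

    Each window is the three CBMreading passage scores for one student;
    the median serves as the seasonal score.
    """
    return (median(window2) - median(window1)) / weeks_between

# Hypothetical fall and spring windows, ~32 weeks apart:
print(seasonal_roi([42, 45, 40], [78, 74, 80], weeks_between=32))  # 1.125
```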

Results

The estimated rates of growth are presented in Table 3.

Table 3. Rates of Improvement from Tri-Annual Screening

| Stratum | Grade | Fall-Winter ROI | Winter-Spring ROI | Fall-Spring ROI |
|---|---|---|---|---|
| Below the 30th percentile | 1 | 1.13 (0.62) | 1.32 (0.70) | 1.39 (0.65) |
| Below the 30th percentile | 2 | 1.55 (0.68) | 1.21 (0.57) | 1.38 (0.45) |
| Below the 30th percentile | 3 | 1.46 (0.66) | 1.16 (0.65) | 1.27 (0.42) |
| Below the 30th percentile | 4 | 1.21 (0.58) | 1.00 (0.62) | 1.05 (0.37) |
| Below the 30th percentile | 5 | 1.12 (0.60) | 0.96 (0.62) | 0.96 (0.35) |
| Below the 30th percentile | 6 | 1.16 (0.69) | 1.08 (0.67) | 1.00 (0.38) |
| 30th to 84th percentiles (Typical) | 1 | 2.21 (0.74) | 1.62 (0.65) | 1.91 (0.50) |
| 30th to 84th percentiles (Typical) | 2 | 1.64 (0.61) | 1.08 (0.64) | 1.36 (0.38) |
| 30th to 84th percentiles (Typical) | 3 | 1.31 (0.64) | 0.94 (0.66) | 1.13 (0.37) |
| 30th to 84th percentiles (Typical) | 4 | 1.12 (0.63) | 0.92 (0.69) | 1.01 (0.38) |
| 30th to 84th percentiles (Typical) | 5 | 0.96 (0.64) | 0.86 (0.70) | 0.89 (0.38) |
| 30th to 84th percentiles (Typical) | 6 | 1.01 (0.70) | 1.00 (0.73) | 0.96 (0.40) |
| Above the 84th percentile | 1 | 2.01 (1.97) | 1.29 (0.82) | 1.62 (0.49) |
| Above the 84th percentile | 2 | 1.14 (0.73) | 0.80 (0.79) | 1.03 (0.45) |
| Above the 84th percentile | 3 | 0.86 (0.73) | 0.69 (0.82) | 0.86 (0.44) |
| Above the 84th percentile | 4 | 0.75 (0.74) | 0.70 (0.76) | 0.79 (0.43) |
| Above the 84th percentile | 5 | 0.68 (0.79) | 0.62 (0.83) | 0.74 (0.45) |
| Above the 84th percentile | 6 | 0.69 (0.78) | 0.77 (0.80) | 0.78 (0.44) |

Note. Cells show mean ROIs with standard deviations in parentheses.

N by grade and season: Grade 1 = 37,213 fall, 44,102 winter, 43,514 spring; Grade 2 = 39,578 fall, 41,654 winter, 41,425 spring; Grade 3 = 38,688 fall, 40,984 winter, 40,468 spring; Grade 4 = 30,456 fall, 33,011 winter, 31,804 spring; Grade 5 = 24,006 fall, 26,565 winter, 25,974 spring; Grade 6 = 6,596 fall, 7,727 winter, 6,485 spring.

 

Rates of Improvement: Published Research

The FAST™ online system also provides guidance to set goals and rates of improvement for each individual student. The minimal and ambitious standards provide a range of ROIs to encourage goals that promote sufficient improvement for students to approximate their end-of-year benchmarks.

There are technology-embedded supports to reference minimal and ambitious rates of improvement (e.g., Deno et al., 2001; Fuchs et al., 1993). Those recommendations (Table 1) correspond with ROIs reported in published research for students at risk or with disabilities when provided evidence-based instructional programs with adequate intensity and fidelity: “mean CBM improvement for students with learning disabilities … was 1.39 words per week” (Deno et al., p. 518) and “Sufficient evidence exists to recommend a growth rate of 2 words per week in reading aloud” (Deno et al., p. 521).

Basis for Minimal Acceptable Rates of Improvement

Normative and criterion growth rates provide the basis for minimal acceptable growth.

In addition to the normative rates of improvement, FAST™ encourages users to consider more ambitious rates during periods of intervention (Fuchs et al., 1993). Students can be expected to grow more than the minimum normative standard, and they might achieve gains of 1.5 to 2.0 words per week (Deno et al., 2001; Fuchs et al., 1993).

References

Christ, T. J., Silberglitt, B., Yeo, S., & Cormier, D. (2010). Curriculum Based Measurement of Oral Reading (CBM-R): An evaluation of growth rates and seasonal effects among students served in general and special education. School Psychology Review, 39(3), 343-349.

Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30(4), 507-524.

Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.

Shinn, M. R. (Ed.) (1989). Curriculum-based measurement: Assessing special children. New York: Guilford Press.

End-of-Year Benchmarks

Rating (Grades 1-6): Full bubble

1. Are benchmarks for minimum acceptable end-of-year performance specified in your manual or published materials?

Yes

a. Specify the end-of-year performance standards:

 

CBMReading Cut Scores: Standards for Words Read Correct per Minute

| Grade | Fall | Winter | Spring |
|---|---|---|---|
| 1st | 15 | (14) 24 | (36) 56 |
| 2nd | (40) 56 | (59) 78 | (70) 95 |
| 3rd | (60) 76 | (74) 93 | (86) 108 |
| 4th | (94) 114 | (103) 128 | (120) 140 |
| 5th | (108) 123 | (113) 131 | (117) 140 |
| 6th | (100) 119 | (102) 131 | (120) 140 |

The unparenthesized value in each cell is the standard: students performing at that level are approximately 80% likely to be on track, and students below it are less likely to be on track. Values in parentheses are high-risk indicators: students at or below those levels are less than 20% likely to be on track.

b. Basis for specifying minimum acceptable end-of-year performance:

Criterion-referenced

c. Specify the benchmarks:

See table above.

d. Basis for specifying these benchmarks?

Criterion-referenced

Procedure for specifying benchmarks for end-of-year performance levels:

The FAIP benchmarks are based on linking studies with DIBELS and AIMSweb. We use psychometric and statistical procedures to link the FAIP score scale to the DIBELS and AIMSweb score scales. The benchmarks for those more widely used assessment systems are then incorporated into the FAIP. This allows users to compare performance on FAIP to performance on those other passage-sets – and their benchmarks.

Linking requires that some component of the data collection design be common across forms. In the FAIP Linking study, this common component was the examinee group. Both FAIP and DIBELS Next passages were administered to approximately 500 students at each grade level. This is referred to in the educational measurement literature as a single-group linking design. The tables below include the number of students by grade and reading level, and the number of passages corresponding to each group.

| Measure | Grade | Mean | SD | Skew | Kurt | N |
|---|---|---|---|---|---|---|
| FAIP | 1 | 35.10 | 30.48 | 1.83 | 6.93 | 432 |
| FAIP | 2 | 65.13 | 38.62 | 0.61 | 3.02 | 466 |
| FAIP | 3 | 83.88 | 37.13 | 0.19 | 2.80 | 490 |
| FAIP | 4 | 112.21 | 40.40 | -0.10 | 2.81 | 527 |
| FAIP | 5 | 117.66 | 38.99 | -0.15 | 2.71 | 520 |
| FAIP | 6 | 140.61 | 34.05 | -0.56 | 3.85 | 212 |
| DIBELS | 1 | 31.18 | 22.60 | 1.29 | 6.13 | 78 |
| DIBELS | 2 | 59.72 | 35.23 | 0.76 | 3.71 | 437 |
| DIBELS | 3 | 77.19 | 35.10 | 0.36 | 3.14 | 452 |
| DIBELS | 4 | 87.33 | 39.74 | 0.26 | 2.61 | 486 |
| DIBELS | 5 | 105.59 | 39.47 | 0.06 | 2.49 | 486 |
| DIBELS | 6 | 126.27 | 30.92 | -0.25 | 2.93 | 211 |

| Measure | Grade | GA Mean (N) | MN Mean (N) | NY Mean (N) |
|---|---|---|---|---|
| FAIP | 1 | 41.87 (152) | 32.20 (208) | 29.18 (72) |
| FAIP | 2 | 69.26 (155) | 66.22 (237) | 52.99 (74) |
| FAIP | 3 | 84.75 (188) | 88.88 (231) | 65.31 (71) |
| FAIP | 4 | 110.56 (198) | 120.58 (234) | 95.05 (95) |
| FAIP | 5 | 115.71 (208) | 124.05 (219) | 106.97 (93) |
| FAIP | 6 | - (0) | - (0) | 140.61 (212) |
| DIBELS | 1 | - (0) | - (0) | 31.18 (78) |
| DIBELS | 2 | 67.95 (129) | 59.82 (234) | 45.05 (74) |
| DIBELS | 3 | 81.28 (153) | 79.55 (229) | 60.54 (70) |
| DIBELS | 4 | 88.92 (157) | 92.70 (233) | 71.69 (96) |
| DIBELS | 5 | 108.95 (175) | 107.22 (218) | 95.42 (93) |
| DIBELS | 6 | - (0) | - (0) | 126.27 (211) |

Because the same individuals read all passages, any difference in performance can be attributed to one passage being easier or harder than another. This attribution requires that the passages be administered at approximately the same time point, which they were.

Within each reading level, the linking process involved three steps. First, the median FAIP performance at each grade level was chosen to define the measurement scale for that grade. Second, a linear linking slope and intercept were obtained for each passage. Finally, raw scores were converted to equated scores on the reference scale using the corresponding slopes and intercepts.
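A sketch of steps two and three under a mean-sigma linking assumption (the document does not specify how the linear coefficients were estimated), with hypothetical single-group data:

```python
import numpy as np

def linking_coefficients(passage_scores, reference_scores):
    """Linear (mean-sigma) linking of one passage's scores onto the reference scale."""
    slope = np.std(reference_scores) / np.std(passage_scores)
    intercept = np.mean(reference_scores) - slope * np.mean(passage_scores)
    return slope, intercept

def equate(raw_score, slope, intercept):
    """Step three: convert a raw passage score to the equated reference scale."""
    return slope * raw_score + intercept

# Hypothetical single-group data: the same students read both passages.
passage = np.array([40.0, 55.0, 62.0, 71.0, 90.0])
reference = np.array([45.0, 58.0, 66.0, 74.0, 95.0])
a, b = linking_coefficients(passage, reference)
print(equate(60, a, b))
```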

DIBELS Next benchmarks were reviewed and converted to the FAIP scale. A similar process is planned for AIMSweb; the data have been collected and analysis is in progress.

The result of this process is a set of statistically linked benchmarks between FAIP and DIBELS Next. Our benchmarks will be updated as the DIBELS Next benchmarks are updated.

Sensitive to Student Improvement

Rating (Grades 1-6): Full bubble

1. Describe evidence that the monitoring system produces data that are sensitive to student improvement (i.e., when student learning actually occurs, student performance on the monitoring tool increases on average):

The correlation between positive growth (x > 0) as measured by CBMreading slopes and positive growth on MAP as measured by fall-to-spring gain scores (y > 0) was statistically significant, r(295) = 0.15, p = 0.01 (95% CI = 0.03-0.26). Approximately 297 students in grades 1-6 were included in the analysis.

The correlation between positive growth (x > 0) as measured by CBMreading slopes and positive growth (y > 0) on the TOSREC as measured by 3-4 observations across the school year was statistically significant, r(372) = 0.31, p < 0.001 (95% CI = 0.22-0.40). Approximately 374 students in grades 1-6 were included in the analysis.

Non-parametric permutation tests were conducted for 117 grade 1, 139 grade 2, 136 grade 3, 141 grade 4, 116 grade 5, and 48 grade 6 students, each measured for 15-30 weeks. For each student, the rate of improvement (slope) was first calculated. The words-read-correct scores were then randomly sampled with replacement and a new slope was calculated against the same x-values; this re-sampling and re-computation occurred 5,000 times per student. The observed slope was then expressed as a z-score relative to that student's distribution of bootstrapped slopes. Bootstrapping is a powerful alternative to traditional significance testing: instead of evaluating a significant difference from 0, it allows an evaluation of meaningful growth estimates. That is, observed growth is compared to growth that would result purely from chance, not necessarily 0. The table below shows the percentage of significant positive growth estimates by grade (i.e., z-score >= 1.96).
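A sketch of the per-student bootstrap described above, with hypothetical data; scores are resampled with replacement and refit against the same week values:

```python
import numpy as np

rng = np.random.default_rng(0)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

def bootstrap_growth_z(weeks, wrc, n_boot=5000):
    """Z-score of the observed slope against the bootstrapped slope distribution."""
    observed = slope(weeks, wrc)
    boot = np.array([slope(weeks, rng.choice(wrc, size=len(wrc), replace=True))
                     for _ in range(n_boot)])
    return (observed - boot.mean()) / boot.std()

# Hypothetical student monitored weekly for 15 weeks:
weeks = np.arange(15)
wrc = 40 + 1.2 * weeks + rng.normal(0, 4, size=15)
print(bootstrap_growth_z(weeks, wrc) >= 1.96)  # significant positive growth?
```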

| Grade | n | Proportion of Significant Growth Estimates |
|---|---|---|
| 1 | 117 | 0.80 |
| 2 | 139 | 0.86 |
| 3 | 136 | 0.83 |
| 4 | 141 | 0.74 |
| 5 | 116 | 0.71 |
| 6 | 48 | 0.75 |

 

Correlation Coefficients between CBMreading Slopes and AIMSweb / DIBELS Next Slopes

| Passage Level | Criterion Slope | n (by grade) | Weeks of Monitoring | Coefficient (95% CI) |
|---|---|---|---|---|
| A | AIMSweb | 59 (Gr 1 = 42; Gr 2 = 15; Gr 3 = 4) | 10-30 | 0.95 (0.92-0.97) |
| A | DIBELS Next | 75 (Gr 1 = 42; Gr 2 = 27; Gr 3 = 6) | 10-30 | 0.76 (0.65-0.85) |
| B | AIMSweb | 108 (Gr 1 = 6; Gr 2 = 41; Gr 3 = 38; Gr 4 = 15; Gr 5 = 7; Gr 6 = 1) | 10-30 | 0.85 (0.79-0.90) |
| B | DIBELS Next | 253 (Gr 1 = 6; Gr 2 = 113; Gr 3 = 91; Gr 4 = 27; Gr 5 = 14; Gr 6 = 2) | 10-30 | 0.75 (0.69-0.80) |
| C | AIMSweb | 116 (Gr 4 = 49; Gr 5 = 44; Gr 6 = 23) | 10-30 | 0.64 (0.52-0.74) |
| C | DIBELS Next | 293 (Gr 4 = 130; Gr 5 = 112; Gr 6 = 51) | 10-30 | 0.50 (0.38-0.61) |

 

 

Based on this table, there is a strong relationship between growth estimates derived from CBMreading and those derived from two similar oral reading fluency progress monitoring measures.

 

Decision Rules for Changing Instruction

Rating (Grades 1-6): dash

Decision Rules for Increasing Goals

Rating (Grades 1-6): dash

Improved Student Achievement

Rating (Grades 1-6): Empty bubble

We analyzed grade-level performance of a suburban Midwest LEA (8 schools) with aReading data (a broad measure of reading achievement). Performance data from the fall of year 1 of implementation (i.e., pre-test) were compared to the fall of year 2 of implementation (i.e., post-test) to evaluate the effect of FAST progress monitoring and data-based decision making (DBDM). That is, the difference in performance between second graders in 2013 (control, M = 463) and second graders in 2014 (after teachers implemented FAST for one year; M = 469) might be attributed, in part, to FAST DBDM. We compared grade-level scores using independent-samples t-tests and calculated Cohen's d as an index of effect size. We did not analyze first-grade scores because students are not tested with CBM-Reading in kindergarten, so the fall of first grade would not show effects of using the measure.
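A sketch of the comparison applied to each grade-by-group cell in the table below, assuming a pooled-SD Cohen's d (the exact variant used is not stated):

```python
import numpy as np
from scipy import stats

def cohort_comparison(year1: np.ndarray, year2: np.ndarray):
    """Independent-samples t-test with pooled-SD Cohen's d."""
    t, p = stats.ttest_ind(year2, year1)
    n1, n2 = len(year1), len(year2)
    pooled_sd = np.sqrt(((n1 - 1) * year1.var(ddof=1) +
                         (n2 - 1) * year2.var(ddof=1)) / (n1 + n2 - 2))
    d = (year2.mean() - year1.mean()) / pooled_sd
    return t, p, d
```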

There were statistically significant differences with meaningful effect sizes in both general and special education samples, observed at both the district and school levels. Although not all differences were statistically significant (viz., special education), that is attributed to statistical power, as effect sizes were still robust. The observed effect sizes converge with the findings of Kingston and Nash (2011). These are meaningful and important improvements that replicated across all grades and populations except sixth-grade special education.

 

FAST Statistical Significance and Effect Sizes

| Grade | Group | 2013 M (SD), N | 2014 M (SD), N | t | df | p | Cohen's d |
|---|---|---|---|---|---|---|---|
| 2nd | GenEd | 463.3 (25.8), 578 | 469.46 (27.7), 523 | 3.80 | 1099 | 0.00* | 0.23 |
| 2nd | SpEd | 445.6 (25.1), 51 | 451.63 (33.9), 46 | 0.10 | 95 | 0.32 | 0.20 |
| 3rd | GenEd | 484.5 (19.3), 507 | 492.41 (24.3), 559 | 5.83 | 1064 | 0.00* | 0.36 |
| 3rd | SpEd | 470.2 (25.7), 46 | 474.63 (26.6), 57 | 0.86 | 101 | 0.40 | 0.17 |
| 4th | GenEd | 498.3 (22.0), 507 | 504.62 (21.7), 513 | 4.61 | 1018 | 0.00* | 0.29 |
| 4th | SpEd | 476.7 (26.1), 62 | 487.88 (28.8), 52 | 2.17 | 112 | 0.03* | 0.41 |
| 5th | GenEd | 504.3 (19.4), 483 | 514.68 (23.2), 505 | 7.54 | 986 | 0.00* | 0.48 |
| 5th | SpEd | 490.8 (26.1), 76 | 498.52 (24.9), 64 | 1.77 | 136 | 0.08 | 0.30 |
| 6th | GenEd | 508.8 (25.1), 431 | 522.29 (24.1), 475 | 8.23 | 904 | 0.00* | 0.55 |
| 6th | SpEd | 506.3 (20.0), 71 | 506.09 (29.5), 80 | -0.04 | 149 | 0.97 | -0.01 |

Note. Adaptive Reading (aReading) is a computer-adaptive test of broad reading on a score scale of 350 to 650, which spans K to 12th-grade achievement. Aggregate data are presented for 8 schools. Grade-level performance was compared across the first two years of implementation.

These are independent samples from the same districts; year 1 data were compared with year 2 data after implementation of progress monitoring, with year 1 serving as the comparison group.

Characteristics of students in sample and how they were selected for participation in study:

The sample for this study was derived from a total sample of 23,008 students (49.4% Female). Approximately 89.3% of the sample of students were not eligible for special education services. The sample consisted of approximately 46% White, 19.1% African American, 14% Hispanic, 13% Asian, 6.9% Multiracial, and 1% American Indian or Alaska Native. We believe this is a representative sample.

Administration Fidelity

Teachers were trained to administer the measures through an automated training system in FastBridge. Results from teacher/administrator use of this system are monitored by district personnel, and users must attain 95% or higher accuracy.

Measures

External outcome measures used in the study, along with psychometric properties:

| Measure Name | Reliability Statistics |
|---|---|
| CBMReading | Internal consistency reliability across grade levels = 0.90-0.92 |
| aReading | Internal consistency reliability = 0.95 |

Results:

Effect sizes for each outcome measure are in our original submission

Summary:

Use of the FastBridge tools (including CBMReading) was associated with student improvement across time. As stated in our earlier submission, we observed statistically significant differences with meaningful effect sizes (p < 0.05). The observed effect sizes converge with the findings of Kingston and Nash (2011). These are meaningful and important improvements that replicated across grades. Thus, it appears that CBMReading progress monitoring may contribute to the increase of scores on a broad measure of reading.

 

Improved Teacher Planning

Grade123456
RatingEmpty bubbleEmpty bubbleEmpty bubbleEmpty bubbleEmpty bubbleEmpty bubble

Describe evidence that teachers' use of the tool results in improved planning:

In a teacher-user survey, 82% of teachers indicated that FAST assessment results were helpful in making instructional grouping decisions (n = 401).  82% of teachers also indicated that assessment results helped them adjust interventions for students who were at-risk (n = 369).  Finally, a majority of teachers indicated that they look at assessment results at least once per month (66%), and nearly a quarter of teachers indicated that they look at assessment results weekly or even more often (n = 376).