FAST CBMReading Spanish

Reading

Cost

The Formative Assessment System for Teachers (FAST) is a cloud-based suite of assessment and reporting tools that includes CBMReading Spanish. As of 2013-14, the system costs $5 per student per year. Because the suite is cloud-based, there are no hardware costs or fees for additional materials.

Technology, Human Resources, and Accommodations for Special Needs

Computer and internet access are required for full use.

Testers will require less than 1 hour of training.

Paraprofessionals can administer the test.

Service and Support

FastBridge Learning
520 Nicollet Mall
Suite 910
Minneapolis, MN 55402-1057
Phone: 612-254-2534

Field-tested training manuals are included and should provide all implementation information.

Access to interactive online self-guided teacher training is included at no additional cost. In-person training is available at an additional cost of $300 per hour.

Purpose and Other Implementation Information

CBMReading Spanish is an assessment used to monitor student progress in reading achievement in the primary grades (1-5). The automated output of each assessment gives information on the accuracy and fluency of passage reading, which can be used to determine instructional level and to inform intervention.

To administer the measure, an examiner listens to the child read a set of short passages aloud. Each passage is read for one minute while the examiner uses the software to mark omissions, insertions, substitutions, hesitations, and mispronunciations as errors. The number of words read correctly per minute (WRCM) is then scored using the online application.

This tool assesses students' reading in Spanish. The evidence reported below was based on a sample of native English speakers in a Spanish-language immersion school.

Usage and Reporting

Administration takes approximately 1-5 minutes per student, depending on the number of passages administered. Additional scoring time is less than 1 minute.


Forms correspond to student ability level rather than grade. All forms are divided into Levels A, B, and C, which correspond to 1st grade, 2nd-3rd grade, and 4th-5th grade reading levels, respectively. There are 20 passages at each level. In addition, there are three passages per grade for screening purposes.

Raw scores are calculated by first counting how many words were read in one minute and then subtracting the number of errors from that total. The result is the number of words read correctly per minute. If the student finishes the passage in less than one minute, the score is prorated to a per-minute rate. When bundles of passages are administered, the median of the three passage scores is used for subsequent analysis. All of this is done automatically by the FAST system.
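
A minimal sketch of that computation follows, with illustrative function names and numbers of our own; it mirrors the scoring rules described above rather than FAST's actual implementation.

```python
from statistics import median

def wrcm(words_attempted: int, errors: int, seconds: float = 60.0) -> float:
    """Words read correctly per minute, prorated to a one-minute
    rate if the student finishes the passage in under a minute."""
    return (words_attempted - errors) * 60.0 / seconds

def bundle_score(passage_scores: list[float]) -> float:
    """When a bundle of passages is administered, the median of the
    three WRCM scores is carried forward for analysis."""
    return median(passage_scores)

# Example: three passages, the second finished in 55 seconds.
scores = [wrcm(71, 4), wrcm(80, 2, seconds=55), wrcm(65, 6)]
print(bundle_score(scores))  # median of [67.0, ~85.1, 59.0] -> 67.0
```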

 

Reliability of the Performance Level Score

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | Full bubble | Full bubble | Full bubble | Full bubble | Full bubble |

| Type of Reliability | Grade | n | Delay | Coefficient (95% CI) | SEM | Information / Subjects |
|---|---|---|---|---|---|---|
| Delayed Test-Retest | 2 | 258 | ~12 weeks (fall to winter) | 0.89 (0.86-0.91) | - | Data drawn from 3 schools in Minnesota |
| Delayed Test-Retest | 3 | 268 | ~12 weeks (fall to winter) | 0.91 (0.89-0.93) | - |  |
| Delayed Test-Retest | 4 | 243 | ~12 weeks (fall to winter) | 0.76 (0.70-0.81) | - |  |
| Delayed Test-Retest | 5 | 232 | ~12 weeks (fall to winter) | 0.92 (0.90-0.94) | - |  |
| Delayed Test-Retest | 1 | 134 | ~32 weeks (fall to spring) | 0.89 (0.85-0.92) | - |  |
| Delayed Test-Retest | 2 | 256 | ~32 weeks (fall to spring) | 0.85 (0.81-0.88) | - |  |
| Delayed Test-Retest | 3 | 258 | ~32 weeks (fall to spring) | 0.86 (0.82-0.89) | - |  |
| Delayed Test-Retest | 4 | 248 | ~32 weeks (fall to spring) | 0.84 (0.80-0.88) | - |  |
| Delayed Test-Retest | 5 | 232 | ~32 weeks (fall to spring) | 0.89 (0.86-0.92) | - |  |
| Alternate Forms | 2 | 89 | 0 weeks | 0.96 (0.94-0.98) | Mdn = 3.54 (range 0-15.56) | Data drawn from 3 schools in Minnesota |
| Alternate Forms | 3 | 80 | 0 weeks | 0.90 (0.85-0.94) | Mdn = 5.30 (range 0-15.00) |  |
| Alternate Forms | 4 | 92 | 0 weeks | 0.89 (0.84-0.93) | Mdn = 8.49 (range 0.71-28.28) |  |
| Alternate Forms | 5 | 80 | 0 weeks | 0.83 (0.75-0.89) | Mdn = 5.30 (range 0-40.30) |  |

 

 

Reliability of the Slope

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | Empty bubble | Full bubble | Full bubble | Full bubble | Full bubble |

| Type of Reliability | Grade | n | Coefficient | SEM | Information / Subjects |
|---|---|---|---|---|---|
| Split-Half | 1 | 46 | 0.41 | 0.70 | See sample description below. |
| Split-Half | 2 | 43 | 0.61 | 0.34 | See sample description below. |
| Split-Half | 3 | 49 | 0.75 | 0.34 | See sample description below. |
| Split-Half | 4 | 15 | 0.68 | 0.40 | See sample description below. |
| Split-Half | 5 | 32 | 0.69 | 0.28 | See sample description below. |

The preceding reliability coefficients were derived from a sample of approximately 185 students in grades 1 through 5 in the FAST system. Approximately 41.1% were female and 58.9% were male. Approximately 65.4% of the students were White, 13.5% were African American, 14.1% were Hispanic, 4.3% were Asian, and 2.7% were multiracial. Approximately 89.2% of students were reported as not eligible for special education services, while 10.8% were receiving special education services.

 

Validity of the Performance Level Score

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | Full bubble | Full bubble | Full bubble | Full bubble | Full bubble |

| Type of Validity | Grade | n | Test or Criterion | Delay | Coefficient (95% CI) | Information / Subjects |
|---|---|---|---|---|---|---|
| Predictive | 2 | 41 | Aprenda* | ~32 weeks | 0.71 (0.51-0.83) | Data drawn from 3 schools in Minnesota |
| Predictive | 3 | 60 | Aprenda* | ~32 weeks | 0.64 (0.46-0.77) |  |
| Predictive | 4 | 54 | Aprenda* | ~32 weeks | 0.40 (0.15-0.61) |  |
| Predictive | 5 | 56 | Aprenda* | ~32 weeks | 0.54 (0.32-0.70) |  |
| Predictive | 1 | 39 | Aprenda* | ~20 weeks | 0.70 (0.49-0.83) |  |
| Predictive | 2 | 41 | Aprenda* | ~20 weeks | 0.75 (0.58-0.86) |  |
| Predictive | 3 | 60 | Aprenda* | ~20 weeks | 0.69 (0.53-0.80) |  |
| Predictive | 4 | 54 | Aprenda* | ~20 weeks | 0.38 (0.13-0.59) |  |
| Predictive | 5 | 56 | Aprenda* | ~20 weeks | 0.50 (0.27-0.68) |  |
| Concurrent | 1 | 39 | Aprenda* | - | 0.73 (0.53-0.85) |  |
| Concurrent | 2 | 41 | Aprenda* | - | 0.80 (0.65-0.89) |  |
| Concurrent | 3 | 60 | Aprenda* | - | 0.72 (0.57-0.82) |  |
| Concurrent | 4 | 54 | Aprenda* | - | 0.45 (0.20-0.64) |  |
| Concurrent | 5 | 55 | Aprenda* | - | 0.42 (0.16-0.61) |  |

 

*Reading comprehension in Spanish was measured by student performance on the full scaled score for reading on the Aprenda 3 (i.e., Total Lectura). The Aprenda is a culturally inclusive, group-administered, standardized test developed by Hispanic educators and modeled on the Stanford Achievement Test (SAT-10). The test was normed on U.S. Spanish-speaking students as well as students from Mexico and Puerto Rico in spring and fall of 2004 (Pearson, 2005). Kuder-Richardson 20 reliability was between 0.77 and 0.96. Criterion-related validity with the Naglieri Nonverbal Ability Test was between 0.11 and 0.66 in the spring of each grade level (Pearson, 2005).

Predictive Validity of the Slope of Improvement

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | Empty bubble | Empty bubble | Empty bubble | Empty bubble | Empty bubble |

| Type of Validity | Grade | n | Test or Criterion | Delay | Coefficient (p-value) | Information / Subjects |
|---|---|---|---|---|---|---|
| Predictive | 2 | 41 | Aprenda* | 0-1 weeks | 0.53 (p < 0.001) | Data drawn from three schools in Minnesota |
| Predictive | 3 | 60 | Aprenda* | 0-1 weeks | 0.42 (p < 0.001) |  |
| Predictive | 4 | 54 | Aprenda* | 0-1 weeks | 0.21 (p = 0.12) |  |
| Predictive | 5 | 55 | Aprenda* | 0-1 weeks | -0.23 (p = 0.09) |  |

 

*Reading comprehension in Spanish was measured by the full scaled score for reading on the Aprenda 3 (i.e., Total Lectura); see the note beneath the validity table above for details.

Bias Analysis Conducted

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | No | No | No | No | No |

Disaggregated Reliability and Validity Data

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | Yes | Yes | Yes | Yes | Yes |

Disaggregated Reliability of the Performance Level Score:

Please note that the sample reported on below includes students of different racial groups; some students were native English speakers and others were native Spanish speakers, but all students were receiving reading instruction in Spanish.

All coefficients below are delayed test-retest reliabilities; no SEMs were reported.

| Grade | n | Coefficient | Test Interval | Subgroup |
|---|---|---|---|---|
| 1 | 6 | 0.99 | Winter to Spring | Asian |
| 2 | 6 | 0.71 | Fall to Winter | Asian |
| 3 | 9 | 0.98 | Fall to Winter | Asian |
| 3 | 9 | 0.96 | Fall to Spring | Asian |
| 4 | 10 | 0.97 | Fall to Winter | Asian |
| 4 | 10 | 0.96 | Fall to Spring | Asian |
| 4 | 10 | 0.93 | Winter to Spring | Asian |
| 5 | 8 | 0.97 | Fall to Winter | Asian |
| 5 | 8 | 0.96 | Fall to Spring | Asian |
| 5 | 8 | 0.95 | Winter to Spring | Asian |
| 1 | 20 | 0.95 | Winter to Spring | African American |
| 4 | 9 | 0.93 | Fall to Winter | African American |
| 4 | 9 | 0.70 | Fall to Spring | African American |
| 5 | 20 | 0.94 | Fall to Winter | African American |
| 5 | 20 | 0.95 | Fall to Spring | African American |
| 5 | 20 | 0.87 | Winter to Spring | African American |
| 1 | 38 | 0.88 | Winter to Spring | Hispanic |
| 1 | 24 | 0.94 | 2-3 week delay | Hispanic |
| 2 | 35 | 0.80 | Fall to Winter | Hispanic |
| 2 | 35 | 0.90 | Fall to Spring | Hispanic |
| 2 | 36 | 0.85 | Winter to Spring | Hispanic |
| 2 | 22 | 0.87 | 2-3 week delay | Hispanic |
| 3 | 25 | 0.89 | Fall to Winter | Hispanic |
| 3 | 25 | 0.82 | Fall to Spring | Hispanic |
| 3 | 25 | 0.91 | Winter to Spring | Hispanic |
| 3 | 18 | 0.80 | 2-3 week delay | Hispanic |
| 4 | 32 | 0.95 | Fall to Winter | Hispanic |
| 4 | 32 | 0.88 | Fall to Spring | Hispanic |
| 4 | 32 | 0.92 | Winter to Spring | Hispanic |
| 4 | 16 | 0.87 | 2-3 week delay | Hispanic |
| 5 | 41 | 0.78 | Fall to Winter | Hispanic |
| 5 | 41 | 0.87 | Fall to Spring | Hispanic |
| 5 | 41 | 0.82 | Winter to Spring | Hispanic |
| 5 | 14 | 0.90 | 2-3 week delay | Hispanic |
| 1 | 12 | 0.97 | Winter to Spring | Multiracial |
| 1 | 10 | 0.96 | 2-3 week delay | Multiracial |
| 2 | 15 | 0.91 | Fall to Winter | Multiracial |
| 2 | 15 | 0.91 | Fall to Spring | Multiracial |
| 2 | 15 | 0.98 | Winter to Spring | Multiracial |
| 2 | 13 | 0.98 | 2-3 week delay | Multiracial |
| 3 | 14 | 0.95 | Fall to Winter | Multiracial |
| 3 | 13 | 0.97 | Fall to Spring | Multiracial |
| 3 | 13 | 0.97 | Winter to Spring | Multiracial |
| 3 | 10 | 0.92 | 2-3 week delay | Multiracial |
| 4 | 10 | 0.94 | Fall to Winter | Multiracial |
| 4 | 10 | 0.58 | Fall to Spring | Multiracial |
| 4 | 10 | 0.68 | Winter to Spring | Multiracial |
| 4 | 10 | 0.49 | 2-3 week delay | Multiracial |
| 5 | 15 | 0.95 | Fall to Winter | Multiracial |
| 5 | 15 | 0.97 | Fall to Spring | Multiracial |
| 5 | 15 | 0.97 | Winter to Spring | Multiracial |
| 5 | 10 | 0.95 | 2-3 week delay | Multiracial |
| 1 | 14 | 0.96 | Fall to Winter | White |
| 1 | 15 | 0.95 | Fall to Spring | White |
| 1 | 163 | 0.94 | Winter to Spring | White |
| 1 | 85 | 0.91 | 2-3 week delay | White |
| 2 | 176 | 0.86 | Fall to Winter | White |
| 2 | 177 | 0.87 | Fall to Spring | White |
| 2 | 180 | 0.92 | Winter to Spring | White |
| 2 | 88 | 0.91 | 2-3 week delay | White |
| 3 | 172 | 0.91 | Fall to Winter | White |
| 3 | 171 | 0.87 | Fall to Spring | White |
| 3 | 172 | 0.94 | Winter to Spring | White |
| 3 | 93 | 0.82 | 2-3 week delay | White |
| 4 | 144 | 0.92 | Fall to Winter | White |
| 4 | 141 | 0.84 | Fall to Spring | White |
| 4 | 141 | 0.84 | Winter to Spring | White |
| 4 | 88 | 0.80 | 2-3 week delay | White |
| 5 | 288 | 0.93 | Fall to Winter | White |
| 5 | 288 | 0.91 | Fall to Spring | White |
| 5 | 286 | 0.92 | Winter to Spring | White |
| 5 | 74 | 0.88 | 2-3 week delay | White |

 

Alternate Forms

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | Empty bubble | Empty bubble | Empty bubble | Empty bubble | Empty bubble |

1. Evidence that alternate forms are of equal and controlled difficulty or, if IRT-based, evidence of item or ability invariance:

To identify parallel passages for each level, field testing was conducted with all passages initially developed (i.e., Level 1 = 25 initial passages, Level 2 = 30 passages, and Level 3 = 34 passages). The data from this field testing were analyzed to determine the optimal passage selection for progress monitoring sets.

The goal of CBMReading-S passage-set field testing was to reduce each set to 20 progress monitoring passages. Descriptive statistics were calculated, and then distance measures (Mahalanobis and Euclidean) were examined along with means, standard deviations, and item-total correlations[1]. Within each level, homogeneity was thus determined primarily on the basis of three criteria: 1) means, 2) standard deviations, and 3) distance measures.

Mean calculation. The mean score for each passage was obtained within each level. A grand mean was then calculated (i.e., a mean of all mean scores), and the absolute value of each passage's residual from it was computed. For example, if the mean score across all means in a level was 110 and item 1 had a mean score of 90, the absolute value of the residual for item 1 was 20. These absolute values were then sorted in descending order and the top X items were eliminated (where X is the difference between the number of original passages at each level and the number required per level, i.e., 20 for progress monitoring).

Standard deviation calculation. The standard deviation for each passage was obtained, and a mean standard deviation was then calculated (i.e., a mean of all standard deviations). The absolute value of each passage's residual was then computed. For example, if the mean standard deviation was 15 and item 1 had a standard deviation of 20, the absolute value of the residual for item 1 was 5. These absolute values were then sorted in descending order and the top X items were eliminated, as above.
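
The same residual-and-sort step applies to both the mean and the standard deviation criteria. Here is a minimal sketch, assuming a NumPy array of one statistic per passage; the function name and toy data are ours, not from the FAST materials.

```python
import numpy as np

def trim_by_residual(stat_per_passage: np.ndarray, n_keep: int) -> np.ndarray:
    """Keep the n_keep passages whose statistic (a mean or a standard
    deviation) lies closest to the grand mean of that statistic; the
    X largest absolute residuals are eliminated."""
    residuals = np.abs(stat_per_passage - stat_per_passage.mean())
    keep = np.argsort(residuals)[:n_keep]  # smallest residuals survive
    return np.sort(keep)                   # indices of retained passages

# Toy example: the passage with mean 90 is farthest from the grand
# mean and is the one eliminated when trimming five passages to four.
means = np.array([110.0, 90.0, 112.0, 108.0, 111.0])
print(trim_by_residual(means, n_keep=4))  # [0 2 3 4]
```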

Distance calculations. For the subjects-by-variables data matrix (where students are represented in the rows and passages in the columns, an s × p matrix), Mahalanobis and Euclidean distances were calculated iteratively over the students' data for each passage. A single passage was eliminated after each iteration until 20 progress monitoring passages remained for Levels 1, 2, and 3, respectively.
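
A sketch of that iterative elimination follows, simplified to Euclidean distance from the mean score profile; the Mahalanobis variant would additionally weight the distance by the inverse covariance of the passages. The data here are simulated and the names are ours, not FAST's analysis code.

```python
import numpy as np

def iterative_passage_trim(scores: np.ndarray, n_keep: int) -> list[int]:
    """Given a students-by-passages (s x p) score matrix, repeatedly
    drop the passage whose column of student scores lies farthest
    (Euclidean distance) from the mean profile of the remaining
    passages, until n_keep passages are left."""
    kept = list(range(scores.shape[1]))
    while len(kept) > n_keep:
        sub = scores[:, kept]
        centroid = sub.mean(axis=1, keepdims=True)      # mean student profile
        dists = np.linalg.norm(sub - centroid, axis=0)  # one distance per passage
        kept.pop(int(np.argmax(dists)))                 # drop the most atypical passage
    return kept

# Example: 50 simulated students by 25 candidate passages, trimmed to 20.
rng = np.random.default_rng(0)
sim_scores = rng.normal(60, 20, size=(50, 25))
print(iterative_passage_trim(sim_scores, n_keep=20))
```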

Descriptive Statistics for Levels 1, 2, & 3 Passage Sets

Level 1 Passages CBMReading Spanish

| Final Passage Order | Mean | Std. Deviation | Skewness | Kurtosis | Minimum | Maximum |
|---|---|---|---|---|---|---|
| 1 | 59.17 | 23.696 | 0.874 | 2.501 | 16 | 139 |
| 2 | 59.78 | 24.264 | 0.782 | 1.463 | 19 | 135 |
| 3 | 58.67 | 21.480 | 1.356 | 4.555 | 24 | 140 |
| 4 | 61.37 | 22.291 | 0.877 | 2.586 | 22 | 137 |
| 5 | 57.79 | 24.031 | 0.764 | 1.633 | 18 | 137 |
| 6 | 62.71 | 24.728 | 0.777 | 1.913 | 19 | 146 |
| 7 | 56.00 | 22.673 | 1.154 | 2.457 | 17 | 132 |
| 8 | 63.78 | 25.784 | 0.482 | 0.920 | 19 | 137 |
| 9 | 55.42 | 22.456 | 1.240 | 3.416 | 17 | 137 |
| 10 | 64.59 | 24.566 | 0.883 | 2.481 | 18 | 149 |
| 11 | 55.28 | 24.087 | 1.120 | 2.542 | 26 | 134 |
| 12 | 64.50 | 24.145 | 0.880 | 1.386 | 20 | 134 |
| 13 | 55.43 | 21.214 | 0.858 | 2.253 | 22 | 126 |
| 14 | 63.11 | 23.207 | 0.959 | 2.083 | 23 | 141 |
| 15 | 57.35 | 22.303 | 1.151 | 2.209 | 26 | 130 |
| 16 | 62.64 | 26.901 | 0.430 | 0.256 | 19 | 138 |
| 17 | 58.59 | 21.437 | 1.204 | 3.187 | 23 | 131 |
| 18 | 61.24 | 22.017 | 0.584 | 0.315 | 29 | 121 |
| 19 | 58.72 | 24.540 | 0.859 | 1.230 | 21 | 136 |
| 20 | 59.48 | 23.589 | 1.454 | 3.820 | 24 | 142 |

 

 

Level 2 Passages CBMReading Spanish

| Final Passage Order | Mean | Std. Deviation | Skewness | Kurtosis | Minimum | Maximum |
|---|---|---|---|---|---|---|
| 1 | 74.49 | 27.741 | 0.614 | 0.950 | 22 | 159 |
| 2 | 75.40 | 29.798 | 0.790 | 0.737 | 22 | 160 |
| 3 | 73.13 | 28.515 | 0.615 | 0.758 | 26 | 163 |
| 4 | 76.63 | 27.537 | 0.615 | 0.982 | 27 | 161 |
| 5 | 72.87 | 27.873 | 0.299 | 0.089 | 24 | 148 |
| 6 | 77.39 | 28.584 | 0.095 | -0.427 | 24 | 141 |
| 7 | 72.31 | 27.248 | 0.960 | 1.986 | 25 | 167 |
| 8 | 78.46 | 30.157 | 0.786 | 0.913 | 29 | 176 |
| 9 | 71.58 | 25.147 | 0.550 | 1.186 | 27 | 149 |
| 10 | 78.90 | 27.570 | 0.392 | 0.152 | 32 | 161 |
| 11 | 71.29 | 24.869 | 0.187 | -0.185 | 26 | 130 |
| 12 | 78.73 | 27.347 | 0.148 | 0.062 | 26 | 148 |
| 13 | 71.78 | 31.393 | 0.598 | 0.493 | 20 | 158 |
| 14 | 77.67 | 26.091 | 0.483 | 0.387 | 31 | 148 |
| 15 | 72.80 | 25.877 | 0.494 | 0.539 | 19 | 149 |
| 16 | 77.21 | 28.468 | 0.541 | 0.072 | 31 | 150 |
| 17 | 73.02 | 29.454 | 0.614 | 0.423 | 21 | 160 |
| 18 | 75.84 | 25.677 | 0.553 | 0.702 | 29 | 145 |
| 19 | 74.36 | 27.856 | 0.545 | 0.280 | 19 | 176 |
| 20 | 75.22 | 21.186 | 0.341 | -0.607 | 39 | 124 |

 

 

Level 3 Passages CBMReading Spanish

| Final Passage Order | Mean | Std. Deviation | Skewness | Kurtosis | Minimum | Maximum |
|---|---|---|---|---|---|---|
| 1 | 97.19 | 33.080 | -0.068 | -0.269 | 28 | 176 |
| 2 | 97.94 | 29.327 | -0.097 | 0.002 | 28 | 172 |
| 3 | 96.00 | 30.256 | -0.427 | -0.208 | 24 | 152 |
| 4 | 99.53 | 29.907 | 0.131 | 0.544 | 28 | 177 |
| 5 | 95.30 | 29.948 | -0.144 | -0.521 | 27 | 162 |
| 6 | 100.88 | 30.336 | -0.066 | -0.579 | 34 | 165 |
| 7 | 94.42 | 29.357 | -0.051 | -0.598 | 27 | 155 |
| 8 | 101.92 | 28.738 | -0.297 | -0.229 | 35 | 168 |
| 9 | 93.10 | 31.375 | 0.560 | 0.572 | 28 | 175 |
| 10 | 104.04 | 32.705 | -0.145 | -0.599 | 29 | 167 |
| 11 | 92.37 | 27.744 | 0.274 | 0.790 | 28 | 165 |
| 12 | 103.73 | 30.245 | -0.586 | -0.293 | 35 | 155 |
| 13 | 93.80 | 34.572 | 0.049 | -0.395 | 24 | 169 |
| 14 | 101.40 | 29.674 | -0.005 | -0.332 | 34 | 160 |
| 15 | 95.16 | 28.982 | -0.028 | -0.237 | 37 | 170 |
| 16 | 100.04 | 26.428 | -0.048 | -0.553 | 46 | 150 |
| 17 | 95.51 | 26.759 | 0.115 | 0.051 | 36 | 167 |
| 18 | 99.42 | 29.902 | -0.163 | -0.790 | 35 | 154 |
| 19 | 96.73 | 31.647 | -0.030 | -0.269 | 27 | 177 |
| 20 | 97.94 | 28.062 | -0.004 | -0.159 | 40 | 173 |

 

 


[1] Cronbach’s alpha was considered as a criterion for eliminating items but this criterion was dropped after items showed extremely high reliability regardless of which items were eliminated.

 

 

2. Number of alternate forms of equal and controlled difficulty: Each level has 20 forms (20 passages each for Levels A, B, and C).

 

Rates of Improvement Specified

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | dash | dash | dash | dash | dash |

End-of-Year Benchmarks

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | Full bubble | Full bubble | Full bubble | Full bubble | Full bubble |

1. Are benchmarks for minimum acceptable end-of-year performance specified in your manual or published materials?

Yes.

a. Specify the end-of-year performance standards:

 

Standards for Words Read Correct per Minute

| Grade | Fall | Winter | Spring |
|---|---|---|---|
| 1st | - | (43) **49** | (72) **80** |
| 2nd | (66) **72** | (85) **91** | (104) **116** |
| 3rd | (61) **66** | (71) **74** | (94) **107** |
| 4th | (105) **111** | (125) **130** | (133) **139** |
| 5th | (110) **120** | (118) **126** | (136) **141** |

Standards are in bold. That level of performance indicates that students are likely to be on track. Students below those standards are unlikely to be on track.

High risk indicators are in parentheses. Students at or below those levels are unlikely to be on track.

b. Basis for specifying minimum acceptable end-of-year performance:

Criterion-referenced.

Receiver operating characteristic (ROC) curve analysis was conducted for each grade at each time point, predicting end-of-year performance on the Aprenda. For grades 1-3, cut scores that balanced specificity (> 0.70) and sensitivity (> 0.70) in predicting students at or below the 20th and 30th percentiles were calculated at each assessment period. Based on previous analysis, the predictive validity of CBMReading-S performance for Aprenda scores decreased at grades 4 and 5; as a result, performance at the 25th and 35th percentiles on the Aprenda was used to demarcate high risk and some risk, respectively (to derive more stable base rates). Cross-validation studies to determine the adequacy of these cut scores are currently underway.
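
A minimal sketch of this kind of cut-score search, using scikit-learn's roc_curve on simulated data; the variable names, thresholds, and tie-breaking rule are illustrative assumptions, not the actual FAST analysis.

```python
import numpy as np
from sklearn.metrics import roc_curve

def balanced_cut_score(wrcm_scores, at_risk, min_rate=0.70):
    """Find the WRCM cut score at which both sensitivity and
    specificity for identifying at-risk readers exceed min_rate,
    preferring the threshold where the two are most balanced."""
    # Low WRCM should signal risk, so the score is negated for roc_curve.
    fpr, tpr, thresholds = roc_curve(at_risk, -np.asarray(wrcm_scores))
    specificity = 1 - fpr
    ok = (tpr >= min_rate) & (specificity >= min_rate)
    if not ok.any():
        return None  # no threshold satisfies both criteria
    best = np.argmin(np.abs(tpr - specificity) + (~ok) * 1e9)
    return -thresholds[best]  # undo the negation to get a WRCM value

# Example: 200 simulated students; risk is tied to low reading fluency.
rng = np.random.default_rng(1)
wrcm_scores = rng.normal(90, 25, 200)
at_risk = (wrcm_scores + rng.normal(0, 15, 200)) < 65
print(balanced_cut_score(wrcm_scores, at_risk))
```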

c. Specify the benchmarks:

See table above.

d. Basis for specifying these benchmarks?

Criterion-referenced.

Procedure for specifying end-of-year performance levels:

See above.

Sensitive to Student Improvement

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | dash | dash | dash | dash | dash |

Decision Rules for Changing Instruction

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | dash | dash | dash | dash | dash |

Decision Rules for Increasing Goals

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | dash | dash | dash | dash | dash |

Improved Student Achievement

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | dash | dash | dash | dash | dash |

Improved Teacher Planning

| Grade | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Rating | Empty bubble | Empty bubble | Empty bubble | Empty bubble | Empty bubble |

Describe evidence that teachers’ use of the tool results in improved planning:

In a teacher-user survey, 82% of teachers indicated that FAST assessment results were helpful in making instructional grouping decisions (n = 401). 82% of teachers also indicated that assessment results helped them adjust interventions for students who were at risk (n = 369). Finally, a majority of teachers indicated that they review assessment results at least once per month (66%), and nearly a quarter of teachers indicated that they review assessment results weekly or more often (n = 376).