FAST earlyReading

Area: Letter Names

 

Cost

Technology, Human Resources, and Accommodations for Special Needs

Service and Support

Purpose and Other Implementation Information

Usage and Reporting

The Formative Assessment System for Teachers (FAST) is a cloud-based suite of assessment and reporting tools that includes earlyReading English. As of 2013-14, there is a $5 per student per year charge for the system. As a cloud-based assessment suite, there are no hardware costs or fees for additional materials.

Computer and internet access are required for full use.

Testers will require less than 1 hour of training.

Paraprofessionals can administer the test.

earlyReading
43 Main St. SE
Suite 509
Minneapolis, MN 55414
Phone: 612-424-3710
 

Field-tested training manuals are included and should provide all implementation information.

Access to interactive online self-guided teacher training is included at no additional cost. In-person training is available at an additional cost of $300 per hour.

earlyReading is used to monitor student progress in early reading in the early primary grades. Most earlyReading assessments provide information on both the accuracy and rate or efficiency of performance.

The appropriate progress monitoring assessments are chosen based on screening performance and are used to diagnose and evaluate skill deficits. Those results help guide instructional and intervention development. Letter Names is recommended for progress monitoring throughout kindergarten, depending on specific student needs.

The Letter Naming task assesses the student’s ability to name upper- and lower-case letters in isolation, both accurately and automatically. The examiner and student each have the same page of letters, organized systematically as described later in this protocol. As the student names the letters aloud from a paper copy, the examiner marks errors on a paper or electronic copy. The resulting score is the number of letters named correctly in one minute.

Each earlyReading test takes approximately 1-2 minutes to administer. earlyReading is administered to individual students, and scoring is automated in the cloud-based system, so no additional time is required to score.

The Letter Naming assessment has 20 alternate forms.

Rate is calculated as the number of correct letter names read per minute. Raw scores of total and correct letter names are also provided. An inventory of known letter names can be generated. 
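As a sketch of how these scores fit together (a hypothetical helper function, not FAST’s actual code), assuming counts of attempted and correctly named letters from a one-minute administration:

```python
# Hypothetical scoring sketch for a one-minute Letter Names administration.
# Function name and structure are illustrative, not FAST's implementation.

def letter_naming_scores(attempted: int, correct: int, seconds: float = 60.0):
    """Return rate (correct letter names per minute), raw counts, and accuracy."""
    errors = attempted - correct
    rate = correct * (60.0 / seconds)          # normalize to a one-minute rate
    accuracy = correct / attempted if attempted else 0.0
    return {"correct": correct, "errors": errors,
            "rate_per_min": rate, "accuracy": accuracy}

# A student who attempts 40 letters and names 36 correctly in one minute:
scores = letter_naming_scores(attempted=40, correct=36)
print(scores["rate_per_min"])   # 36.0
print(scores["errors"])         # 4
```

The raw correct/error counts support the inventory of known letter names, while the per-minute rate is the primary progress monitoring metric.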

 

Reliability of the Performance Level Score: Convincing Evidence

Type of Reliability | Age or Grade | n (range) | Coefficient (range) | Coefficient (median) | SEM or CSEM* | Information (including normative data) / Subjects
Alternate Forms | K | 36-37 | 0.82-0.92 | 0.88 | 5.07 (3.77) | Collected in Spring; see Table 1 below
Test-Retest | K | 76 | 0.86-0.94 | 0.91 | 4.24* | Collected in Fall 2012
Delayed Test-Retest | K | 1781 | 0.62-0.67 | 0.65 | -- |
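For reference, the standard error of measurement reported in tables like this is conventionally related to the reliability coefficient by SEM = SD × √(1 − r). A minimal sketch, using an illustrative score SD (the study’s actual SD is not reported here):

```python
import math

# Standard psychometric formula: SEM = SD * sqrt(1 - r), where r is the
# reliability coefficient. The SD of 14 below is illustrative only.

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    return sd * math.sqrt(1.0 - reliability)

# With an illustrative SD of 14 and the reported median test-retest
# reliability of 0.91:
print(round(standard_error_of_measurement(14.0, 0.91), 2))  # 4.2
```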

 

Table 1. Sample Demographics for Alternate Forms Study

Category | District A (%) | District B (%) | District C (%)
White | 56.1 | 93 | 79.5
Black | 13.5 | 4 | 6.8
Hispanic | 10.3 | 3 | 4.5
Asian/Pacific Islander | 19.4 | 4 | 10.5
American Indian/Alaskan Native | >.1 | 1 | .25
Free and Reduced Lunch | 44.9 | 17 | 9
LEP | 15.8 | 6 | 6
Special Education | 12.6 | 10 | 10

 

Reliability of the Slope: Convincing Evidence

Type of Reliability | Age or Grade | n (range) | Coefficient (range) | Coefficient (median) | SEM | Information (including normative data) / Subjects
Split-Half | K | 151 | - | 0.76 | 0.29 |
Reliability for the Slope | K | 126 | - | 0.67 | - | Duration of progress monitoring greater than 10 weeks
Reliability for the Slope | K | 25 | - | 0.54 | - | Duration of progress monitoring between 6 and 10 weeks
Reliability for the Slope | K | 21 | - | 0.50 | - | Duration of progress monitoring greater than 10 weeks

 

Validity of the Performance Level Score: Convincing Evidence

The aggregate (full scale) score for the GRADE was used to estimate all criterion validity coefficients unless otherwise noted.

Type of Validity | Age or Grade | Test or Criterion | n (range) | Coefficient (range) | Coefficient (median) | Information (including normative data) / Subjects
Concurrent | K | GRADE composite Level P | 85 | - | 0.41 | See subject description below
Concurrent | K | GRADE composite Level K | 214 | - | 0.18 | Data collected in Spring; see subject description below
Predictive | K | GRADE composite Level K | 230 | - | 0.47 | Fall to Spring prediction; see subject description below
Predictive | K | GRADE composite Level K | 210 | - | 0.63 | Winter to Spring prediction; see subject description below

Subjects: Participants included kindergarten students from two school districts. In School District 1, three elementary schools participated; participating kindergarten students were enrolled in all-day or half-day kindergarten. The majority of students within the district were White (78%), with the remaining students identified as African American (19%) or other (3%). 40 to 50 percent of students at each school received free and reduced lunch. In School District 2, the majority of students within the district were White (53%), with the remaining students identified as African American (26%), Hispanic (11%), Asian (8%), or other (2%). 40 to 50 percent of students at each school received free and reduced lunch.

 

Predictive Validity of the Slope of Improvement: Unconvincing Evidence

Type of Validity | Age or Grade | Test or Criterion | n (range) | Coefficient (range) | Coefficient (median) | Information (including normative data) / Subjects
Predictive Validity of Slope | K | GRADE composite Level K | 231 | - | 0.44 | Data collected in fall, winter, and spring; see subject description below

Subjects: Participants included kindergarten students from two school districts. In School District 1, three elementary schools participated; participating kindergarten students were enrolled in all-day or half-day kindergarten. The majority of students within the district were White (78%), with the remaining students identified as African American (19%) or other (3%). 40 to 50 percent of students at each school received free and reduced lunch. In School District 2, the majority of students within the district were White (53%), with the remaining students identified as African American (26%), Hispanic (11%), Asian (8%), or other (2%). 40 to 50 percent of students at each school received free and reduced lunch.

 

Disaggregated Reliability and Validity Data: Unconvincing Evidence

Disaggregated Reliability of the Performance Level Score:

The following disaggregated delayed test-retest reliability coefficients were derived from a sample of approximately 15,985 kindergarten students in the FAST system. Approximately 31.1% were female and 33.5% were male, with approximately 35.4% of the sample not reporting gender. Approximately 42.5% of the students were White, 8.6% were African American, 5.1% were Hispanic, 3.6% were Asian, 1.9% were recorded as “Other”, 1.7% were Multiracial, 1.2% were American Indian or Alaska Native, and 0.1% were Native Hawaiian or Other Pacific Islander; approximately 35.4% did not report ethnicity/race. Approximately 53.7% of students were reported as not eligible for special education services, while 10.8% were receiving special education services; approximately 35.4% did not report special education status or receipt of services.

Type of Reliability | Age or Grade | n (range) | Coefficient (range) | Coefficient (median) | SEM | Information (including normative data) / Subjects
Delayed Test-Retest | K | 117 | - | 0.67 | - | Fall to Winter; American Indian/Alaska Native
Delayed Test-Retest | K | 61 | - | 0.48 | - | Fall to Spring; American Indian/Alaska Native
Delayed Test-Retest | K | 69 | - | 0.74 | - | Winter to Spring; American Indian/Alaska Native
Test-Retest | K | 22 | - | 0.87 | - | 2-3 Week Delay; American Indian/Alaska Native
Delayed Test-Retest | K | 361 | - | 0.68 | - | Fall to Winter; Asian
Delayed Test-Retest | K | 288 | - | 0.63 | - | Fall to Spring; Asian
Delayed Test-Retest | K | 298 | - | 0.73 | - | Winter to Spring; Asian
Test-Retest | K | 144 | - | 0.72 | - | 2-3 Week Delay; Asian
Delayed Test-Retest | K | 826 | - | 0.64 | - | Fall to Winter; African American
Delayed Test-Retest | K | 732 | - | 0.58 | - | Fall to Spring; African American
Delayed Test-Retest | K | 777 | - | 0.72 | - | Winter to Spring; African American
Test-Retest | K | 347 | - | 0.81 | - | 2-3 Week Delay; African American
Delayed Test-Retest | K | 443 | - | 0.60 | - | Fall to Winter; Hispanic
Delayed Test-Retest | K | 386 | - | 0.45 | - | Fall to Spring; Hispanic
Delayed Test-Retest | K | 410 | - | 0.74 | - | Winter to Spring; Hispanic
Test-Retest | K | 180 | - | 0.71 | - | 2-3 Week Delay; Hispanic
Delayed Test-Retest | K | 179 | - | 0.57 | - | Fall to Winter; Multiracial
Delayed Test-Retest | K | 143 | - | 0.48 | - | Fall to Spring; Multiracial
Delayed Test-Retest | K | 148 | - | 0.71 | - | Winter to Spring; Multiracial
Test-Retest | K | 81 | - | 0.87 | - | 2-3 Week Delay
Delayed Test-Retest | K | 4225 | - | 0.64 | - | Fall to Winter; White
Delayed Test-Retest | K | 3480 | - | 0.50 | - | Fall to Spring; White
Delayed Test-Retest | K | 3306 | - | 0.65 | - | Winter to Spring; White
Test-Retest | K | 1138 | - | 0.71 | - | 2-3 Week Delay; White

 

Disaggregated Reliability of the Slope:

The following disaggregated reliability of the slope coefficients were derived from a sample of approximately 907 first-grade students and 1,180 kindergarten students in the FAST system (N = 2,087). Approximately 33.7% were female, 44.8% were male, and 21.5% did not report gender. Approximately 45.9% of the students were White, 11.5% were African American, 7.2% were Hispanic, 6.8% were Asian, 1.2% were recorded as “Other”, 3.3% were Multiracial, 2.2% were American Indian or Alaska Native, and 0.4% were Native Hawaiian or Other Pacific Islander; approximately 21.5% did not report ethnicity/race. Approximately 59.9% of students were reported as not eligible for special education services, while 18.5% were receiving special education services; approximately 21.5% did not report special education status (i.e., receipt of services).

Type of Reliability | Age or Grade | n (range) | Coefficient (range) | Coefficient (median) | Information (including normative data) / Subjects
Reliability for the Slope | K | 107 | - | 0.49 | White
Reliability for the Slope | K | 13 | - | 0.76 | Hispanic
Reliability for the Slope | K | 14 | - | 0.99 | African American

 

Disaggregated Validity of the Performance Level Score:

The following disaggregated aReading validity coefficients were derived from a sample of approximately 17,137 kindergarten students in the FAST system. Approximately 32.2% were female and 34.7% were male, with approximately 33% of the sample not reporting gender. Approximately 42.6% of the students were White, 8.5% were African American, 4.9% were Hispanic, 3.5% were Asian, 4.4% were recorded as “Other”, 1.7% were Multiracial, 1.2% were American Indian or Alaska Native, and 0.1% were Native Hawaiian or Other Pacific Islander; approximately 33% did not report ethnicity/race. Approximately 55.5% of students were reported as not eligible for special education services, while 3.5% were receiving special education services; approximately 40.9% did not report special education status.

Type of Validity | Age or Grade | Test or Criterion | n (range) | Coefficient (range) | Coefficient (median) | Information (including normative data) / Subjects
Predictive | K | aReading | 8 | - | 0.41 | American Indian/Alaska Native; Fall to Spring prediction
Concurrent | K | aReading | 138 | - | 0.33 | American Indian/Alaska Native; data collected in the Winter
Predictive | K | aReading | 138 | - | 0.36 | American Indian/Alaska Native; Winter to Spring prediction
Predictive | K | aReading | 149 | - | 0.26 | American Indian/Alaska Native; Fall to Winter prediction
Predictive | K | aReading | 97 | - | 0.70 | Asian; Fall to Spring prediction
Predictive | K | aReading | 25 | - | 0.57 | Asian; Winter to Spring prediction
Predictive | K | aReading | 51 | - | 0.79 | African American; Fall to Spring prediction
Predictive | K | aReading | 14 | - | 0.72 | African American; Winter to Spring prediction
Predictive | K | aReading | 37 | - | 0.53 | Hispanic; Fall to Spring prediction
Predictive | K | aReading | 28 | - | 0.52 | Multiracial; Fall to Spring prediction

 

Alternate Forms: Convincing Evidence

1. Evidence that alternate forms are of equal and controlled difficulty or, if IRT based, evidence of item or ability invariance:

All 26 letters of the English alphabet were used, each appearing once in upper-case and once in lower-case. Every form therefore includes 52 distinct letters, with 40 extra letters added to account for student variation in letter naming. Each form is organized so that rows alternate between all upper-case and all lower-case letters; for example, the first row is all lower-case, the second row all upper-case, and so on. Within the first 26 letters, each letter of the English alphabet is represented in either the upper-case or lower-case rows; the second set of 26 letters contains the opposite case. Upper-case and lower-case letters were each categorized as “dissimilar” or “same/moderate similarity.” The first two lower-case letters were randomly chosen from the “same/moderate similarity” category, and the third letter was randomly chosen from the “dissimilar” category. Each set of three letters thereafter contained one randomly chosen “dissimilar” letter and two “same/moderate similarity” letters, with the order within each set of three randomized after the first set. There are 10 rows of 10 letters each. The first 6 rows can be completed for an inventory of all upper- and lower-case letter names; the last 4 rows are randomly ordered and are included to account for variation in student letter name reading.

To evaluate parallel form construction, a one-way within-subjects (repeated measures) ANOVA was conducted to compare the effect of LN alternate forms (n = 5) on the number of correct responses within individuals. There was not a significant effect of forms, F(1, 146) = 0.71, p = 0.40, indicating that different forms did not yield significantly different mean numbers of correct responses.
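The parallel-forms comparison described above can be sketched as a one-way repeated-measures ANOVA. Below is a minimal pure-Python sketch of the F-statistic computation; the scores are made up for illustration, and the function is not the study’s actual analysis code:

```python
# Minimal one-way repeated-measures ANOVA sketch: tests whether alternate
# forms yield different mean scores within the same students. Data are
# illustrative (4 students x 3 forms), not the study's real data.

def rm_anova_f(data):
    """data: list of per-student score lists, one score per form.
    Returns (F, df_forms, df_error)."""
    n = len(data)        # number of students (subjects)
    k = len(data[0])     # number of forms (repeated conditions)
    grand = sum(sum(row) for row in data) / (n * k)
    form_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]

    ss_forms = n * sum((m - grand) ** 2 for m in form_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_forms - ss_subj   # subject-by-form residual

    df_forms = k - 1
    df_error = (n - 1) * (k - 1)
    f = (ss_forms / df_forms) / (ss_error / df_error)
    return f, df_forms, df_error

scores = [[30, 31, 29], [42, 40, 41], [25, 27, 26], [35, 38, 36]]
f, df1, df2 = rm_anova_f(scores)
print(df1, df2)  # 2 6
```

A small (non-significant) F here would support the claim that forms are of comparable difficulty.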

2. Number of alternate forms of equal and controlled difficulty: 20

3. Number of items in the item bank for each grade level: There are 20 forms and 100 letters on each form.  

Sensitive to Student Improvement: Convincing Evidence

1. Describe evidence that the monitoring system produces data that are sensitive to student improvement (i.e., when student learning actually occurs, student performance on the monitoring tool increases on average).

Across 149 kindergarten students, the slope for average weekly improvement (β1, Week) was significantly different from 0 (β1 = 0.61; SE = 0.05). In addition, a significant interaction between special education status and the slope for weekly improvement was observed: β3 (Special Education Status × Week) = -0.55 (SE = 0.14). This significant interaction suggests that students receiving special education services (n = 17), on average, improved significantly less than regular education students.
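A per-student weekly improvement slope of this kind can be estimated with ordinary least squares over (week, score) pairs. A minimal sketch with made-up data (this is not the study’s multilevel model, which also included an interaction term):

```python
# Sketch: estimate a student's weekly improvement slope by simple OLS.
# Weeks and scores below are illustrative only.

def weekly_slope(weeks, scores):
    n = len(weeks)
    mw = sum(weeks) / n
    ms = sum(scores) / n
    num = sum((w - mw) * (s - ms) for w, s in zip(weeks, scores))
    den = sum((w - mw) ** 2 for w in weeks)
    return num / den

# A hypothetical student gaining roughly 0.6 letters per minute per week:
slope = weekly_slope([0, 1, 2, 3, 4], [10, 10.5, 11.3, 11.8, 12.4])
print(round(slope, 2))  # 0.61
```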

End-of-Year Benchmarks: Convincing Evidence

1. Are benchmarks for minimum acceptable end-of-year performance specified in your manual or published materials?

Yes.

a. Specify the end-of-year performance standards:

Kindergarten: 51 letter names read correctly per minute.

b. Basis for specifying minimum acceptable end-of-year performance:

Criterion-referenced.

c. Specify the benchmarks:

Low risk (high risk):

Kindergarten: Fall = 25 (20); Winter = 40 (35); Spring = 51 (48)

d. Basis for specifying these benchmarks?

Criterion-referenced
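Using the kindergarten benchmark values listed above, a score can be mapped to a risk level. The function name, three-level labels, and exact boundary handling (≥ vs. >) below are illustrative assumptions, not FAST’s published decision rules:

```python
# Sketch of mapping a Letter Names score to a risk level using the
# published kindergarten benchmarks. Boundary handling (>=) is an
# assumption for illustration.

BENCHMARKS = {  # season: (primary/low-risk cut, secondary/high-risk cut)
    "fall": (25, 20),
    "winter": (40, 35),
    "spring": (51, 48),
}

def risk_level(score: float, season: str) -> str:
    primary, secondary = BENCHMARKS[season]
    if score >= primary:
        return "low risk"
    if score >= secondary:
        return "some risk"
    return "high risk"

print(risk_level(52, "spring"))  # low risk
print(risk_level(49, "spring"))  # some risk
print(risk_level(30, "winter"))  # high risk
```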

The primary score for interpretation is the number of letters named correctly per minute, and psychometric evidence supports this value as the primary method of interpretation. Accuracy scores are provided as a supplement: students who perform at less than 95% accuracy are flagged for the user to consider. Our training materials caution against interpreting rate-based scores until accuracy is approximately 95%. Goals in the system use number correct per minute as the primary index of growth, but also prompt monitoring of the accuracy of student responding. This is designed to help teachers and other users consider multiple aspects of student performance, including number correct, errors, rate, and accuracy.

Benchmarks were established for earlyReading to help teachers accurately identify students who are at risk or not at risk for academic failure. These benchmarks were developed from a criterion study examining earlyReading assessment scores in relation to scores on the Group Reading Assessment and Diagnostic Evaluation (GRADE). Measures of diagnostic accuracy were used to determine decision thresholds using criteria related to sensitivity, specificity, and area under the curve (AUC). Specifically, specificity and sensitivity were computed at different cut scores in relation to maximum AUC values. Decisions for final benchmark percentiles were generated based on maximizing each criterion at each cut score (i.e., when the cut score maximized specificity ≥ 0.70, and sensitivity was also ≥ 0.70; see Silberglitt & Hintze, 2005). Precedence was given to maximizing specificity. Based on these analyses, the values at the 40th and 15th percentiles were identified as the primary and secondary benchmarks for earlyReading, respectively. These values thus correspond with a prediction of performance at the 40th and 15th percentiles on the GRADE, a nationally normed assessment of early reading skills. Performance above the primary benchmark indicates the student is at low risk for long-term reading difficulties. Performance between the primary and secondary benchmarks indicates the student is at some risk for long-term reading difficulties. Performance below the secondary benchmark indicates the student is at high risk for long-term reading difficulties. These risk levels help teachers accurately monitor student progress using the FAST earlyReading measures.
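The sensitivity/specificity logic described above can be sketched as follows. The data, function name, and the convention that scores below the cut are flagged as at risk are illustrative assumptions, not the study’s actual analysis:

```python
# Sketch of diagnostic accuracy at a candidate cut score: sensitivity and
# specificity against a binary criterion (e.g., below the 40th percentile
# on the GRADE). All data below are made up for illustration.

def sens_spec(scores, at_risk, cut):
    """at_risk[i] is True if student i was at risk on the criterion.
    Students scoring below the cut are flagged as at risk (assumption)."""
    tp = sum(1 for s, r in zip(scores, at_risk) if r and s < cut)
    fn = sum(1 for s, r in zip(scores, at_risk) if r and s >= cut)
    tn = sum(1 for s, r in zip(scores, at_risk) if not r and s >= cut)
    fp = sum(1 for s, r in zip(scores, at_risk) if not r and s < cut)
    return tp / (tp + fn), tn / (tn + fp)

scores = [10, 18, 22, 30, 41, 45, 50, 55]
at_risk = [True, True, True, False, False, False, False, False]
sens, spec = sens_spec(scores, at_risk, cut=25)
print(sens, spec)  # 1.0 1.0
```

In practice, such values would be computed across many candidate cut scores and the threshold chosen to keep both above 0.70, with precedence to specificity.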

Normative profile:

Representation: Local
Date: 2012-2013
Number of States: 1
Size: ~230
Gender: 55% Male, 45% Female
Region: Upper Midwest
Disability classification: 7% Special Education

Procedure for specifying benchmarks for end-of-year performance levels:

Diagnostic accuracy was used to determine cutpoints, or benchmarks, at the 15th and 40th percentile. These correspond to high risk and low risk, respectively. 

Rates of Improvement Specified: Unconvincing Evidence

1. Is minimum acceptable growth (slope of improvement or average weekly increase in score by grade level) specified in manual or published materials?

Yes.

a. Specify the growth standards:

The table below provides average weekly growth by percentile and season for Kindergarten students.

Metric: Rate

 

Kindergarten

Percentile | Winter | Spring
90th | 12.63 | 6.63
80th | 10.56 | 5.37
70th | 8.87 | 4.40
60th | 7.63 | 3.50
50th | 6.36 | 2.73
40th | 5.24 | 1.96
30th | 4.19 | 1.13
20th | 3.00 | 0.28
10th | 1.64 | -0.87
Average | 6.80 | 2.81
SD | 4.05 | 2.78
N | 4125 | 3201
Range | -0.42 to 16.67 | -2.97 to 9.23

 
 
b. Basis for specifying minimum acceptable growth:
 
Norm-referenced weekly growth is calculated.
 

Normative profile: 

Representation: Local
Date: 2013-2014
Number of States: 2
Size: The sample was composed of 26,566 total students across two states. However, one of the states did not provide demographic information by the time of this submission. This state’s sample comprised 10,776 total students, or 40.6% of the total two-state sample. This fact is reflected by the percentages labeled “N/A” or "Unknown" below.
Gender: 28.9% Male, 30.5% Female, 40.6% N/A
Region: Upper Midwest
Race/Ethnicity: 38.8% White, 7.8% Black, 4.6% Hispanic, 40.6% Unknown, 1.0% American Indian/Alaska Native, 3.4% Asian/Pacific Islander, 2.0% Other, 1.9% Multiracial.
Disability classification: 49.3% of this sample did not receive special education services; 3.4% of this sample did receive special education services; the special education status was unknown for 47.3% of this sample.
Grade distribution: 57.5% kindergarten; 42.5% first grade.

 

Decision Rules for Changing Instruction: Data Unavailable

Decision Rules for Increasing Goals: Data Unavailable

Improved Student Achievement: Data Unavailable

Improved Teacher Planning: Unconvincing Evidence

Description of evidence that teachers’ use of the tool results in improved planning:

In a teacher-user survey, 82% of teachers indicated that FAST assessment results were helpful in making instructional grouping decisions (n = 401). 82% also indicated that assessment results helped them adjust interventions for students who were at risk (n = 369). Finally, a majority of teachers indicated that they look at assessment results at least once per month (66%), and nearly a quarter indicated that they look at assessment results weekly or more often (n = 376).