FAST earlyReading Spanish

Area: Letter Sounds Spanish

Cost

The Formative Assessment System for Teachers (FAST) is a cloud-based suite of assessment and reporting tools that includes earlyReading Spanish. As of 2013-14, there is a $5 per student per year charge for the system. As a cloud-based assessment suite, there are no hardware costs or fees for additional materials. 

Technology, Human Resources, and Accommodations for Special Needs

Computer and internet access is required for full use.

Testers will require less than 1 hour of training.

Paraprofessionals can administer the test.

Service and Support

earlyReading
43 Main St. SE
Suite 509
Minneapolis, MN 55414
Phone: 612-424-3710

Field-tested training manuals are included and should provide all implementation information.

Access to interactive online self-guided teacher training is included at no additional cost. In-person training is available at an additional cost of $300 per hour.

Purpose and Other Implementation Information

earlyReading is used to monitor student progress in early reading in the early primary grades. Most earlyReading assessments provide information on both the accuracy and the rate or efficiency of performance.

The appropriate progress monitoring assessment(s) is/are chosen based on screening performance and are used to diagnose and evaluate skill deficits. Those results help guide instructional and intervention development. It is recommended that Letter Sounds be used for progress monitoring throughout Kindergarten depending on specific student needs.

The Letter Sounds task assesses the student's accuracy and efficiency in saying the sounds of upper- and lower-case letters in isolation. The examiner and student each have the same page of letters presented in a random order; that page is organized in a systematic manner described in more detail later in this protocol. As the student says the letter sounds aloud from a paper copy, the examiner marks errors on a paper or electronic copy. The resulting score is the number of letter sounds said correctly in one minute.

This tool provides information on students in Spanish. Evidence was based on a sample of native English speakers in a Spanish-language immersion school.

Usage and Reporting

Each earlyReading test takes approximately 1-2 minutes to administer. earlyReading is computer administered and scoring is automated; it does not require any additional time to score.

The Letter Sounds assessment has 20 alternate forms.

Rate is calculated as the number of correct letter sounds read per minute. Raw scores of total and correct letter sounds are also provided. An inventory of known letter sounds can be generated.  
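As an illustration of how these reported scores relate to one another, the following sketch computes the rate, raw, and accuracy values for a single timed administration. The function name and inputs are hypothetical and are not part of the FAST software.

```python
# Illustrative only: computes the scores described above (correct per minute,
# raw totals, and accuracy) from one timed Letter Sounds administration.
# The function and its inputs are hypothetical, not part of the FAST system.

def letter_sounds_scores(attempted: int, errors: int, seconds: float = 60.0) -> dict:
    """Return rate, raw, and accuracy scores for one administration."""
    correct = attempted - errors
    return {
        "total_attempted": attempted,
        "correct": correct,
        "correct_per_minute": correct * 60.0 / seconds,
        "accuracy": correct / attempted if attempted else 0.0,
    }

print(letter_sounds_scores(attempted=34, errors=4))  # e.g., 30 correct per minute
```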

 

Reliability of the Performance Level Score: Convincing Evidence

 

Type of Reliability | Age or Grade | n (range) | Coefficient | SEM or CSEM | Information (including normative data) / Subjects
Coefficient alpha | K | 472 | 0.80 (first 7 items); 0.95 (first 25 items); 0.98 (first 43 items) | - | Random sample from the FAST database, 2012-2013 academic year; see sample note below
Split-half | K | 472 | 0.84 (first 7 items); 0.96 (first 25 items); 0.98 (first 43 items) | - | Random sample from the FAST database, 2012-2013 academic year; see sample note below
Delayed test-retest | K | 325 | 0.52 | - | Fall to Winter; see subject information in Table 1, below
Delayed test-retest | K | 121 | 0.75 | - | Fall to Spring; see subject information in Table 1, below
Delayed test-retest | K | 413 | 0.63 | - | Winter to Spring
Delayed test-retest | K | 122 | 0.66 | - | 2-3 week delay

Sample note: Coefficient alpha and split-half data were derived from a random sample of students from the FAST database for the 2012-2013 academic year. Approximately 50.6% of the students were female and 49.4% were male. Approximately 68% of students were White, 17.4% Hispanic, 8.9% Black, 4% Multiracial, and 1.7% Asian.

 

Table 1. Demographics for Study A

Category | Kindergarten | First Grade | Total Percent
Ethnicity: White | 151 | 159 | 69.50%
Ethnicity: Hispanic | 35 | 34 | 14.50%
Ethnicity: Black | 19 | 9 | 6.30%
Ethnicity: Asian | 8 | 6 | 3.10%
Gender: Male | 109 | 119 | 51.10%
Gender: Female | 114 | 104 | 48.90%
Special Education Status: General Education | 211 | 214 | 95.30%
Special Education Status: Special Education | 12 | 9 | 4.70%

 

Reliability of the Slope: Unconvincing Evidence

 

Type of Reliability | Age or Grade | n (range) | Coefficient | SEM | Information (including normative data) / Subjects
Split-half | K | 30 | 0.79 | 0.28 | See sample note below
Reliability for the slope | K | 27 | 0.57 | - | Duration of progress monitoring greater than 10 weeks; see sample note below

Sample note: Reliability for the slope was derived from a sample of approximately 57 first-grade students and 40 Kindergarten students in the FAST system (N = 97). Approximately 46.4% were female and 53.6% were male. Approximately 54.6% of the students were White, 17.5% African American, 18.6% Hispanic, 3.1% Asian, 4.1% Multiracial, and 2.1% American Indian or Alaskan Native. Approximately 90.7% of students were reported as not eligible for special education services, while 9.3% were receiving special education services.

The split-half reliability was calculated by correlating the slope fit to odd-numbered data points with the slope fit to even-numbered data points, applying the Spearman-Brown correction. Only students with more than six weekly progress monitoring data points collected over six weeks were included in the analyses. The multi-level reliability for the slope was calculated only for students with more than 10 weeks of weekly progress monitoring data.
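The following sketch illustrates the split-half procedure described above, assuming each student's progress monitoring scores are stored as equally spaced weekly data points; the data layout and function name are illustrative rather than the FAST implementation.

```python
# Sketch of the split-half reliability of the slope described above:
# fit a slope to odd-indexed and even-indexed data points for each student,
# correlate the two sets of slopes across students, and apply the
# Spearman-Brown correction. Data here are simulated.
import numpy as np

def split_half_slope_reliability(scores_by_student: list[np.ndarray]) -> float:
    odd_slopes, even_slopes = [], []
    for scores in scores_by_student:
        weeks = np.arange(len(scores))
        odd_idx, even_idx = weeks[1::2], weeks[0::2]
        odd_slopes.append(np.polyfit(odd_idx, scores[odd_idx], 1)[0])
        even_slopes.append(np.polyfit(even_idx, scores[even_idx], 1)[0])
    r = np.corrcoef(odd_slopes, even_slopes)[0, 1]
    return 2 * r / (1 + r)  # Spearman-Brown correction for half-length

rng = np.random.default_rng(0)
students = [np.cumsum(rng.normal(1.0, 2.0, 8)) for _ in range(30)]  # 8 weekly points each
print(round(split_half_slope_reliability(students), 2))
```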

Validity of the Performance Level Score: Convincing Evidence

Type of Validity | Age or Grade | Test or Criterion | n (range) | Coefficient | Information (including normative data) / Subjects
Predictive | K | Aprenda Preprimario 2 | 87 | 0.28 | Fall to Spring prediction; see sample note below
Predictive | K | Aprenda Preprimario 2 | 35 | 0.49 | Winter to Spring prediction; see sample note below
Concurrent | K | Aprenda Preprimario 2 | 87 | 0.68 | Data collected in Spring; see sample note below

Sample note: The majority of students within the immersion schools were White (74%). Students were also identified as African American (9%), Hispanic (12%), Asian (4%), or other (1%).

 

Predictive Validity of the Slope of Improvement: Unconvincing Evidence

Type of Validity | Age or Grade | Test or Criterion | n (range) | Coefficient | Information (including normative data) / Subjects
Predictive validity of slope | K | Aprenda Preprimario 2 | 88 | 0.68 | Data collected in fall, winter, and spring; see subject information under GOM 3

 

Disaggregated Reliability and Validity Data: Unconvincing Evidence

Disaggregated Reliability of the Performance Level Score:

Type of Reliability | Age or Grade | n (range) | Coefficient | SEM | Information (including normative data) / Subjects
Delayed test-retest | K | 11 | 0.60 | - | Winter to Spring; Asian
Delayed test-retest | K | 11 | 0.28 | - | Fall to Winter; Asian
Delayed test-retest | K | 16 | 0.41 | - | Fall to Spring; Asian
Delayed test-retest | K | 28 | 0.78 | - | Winter to Spring; African American
Test-retest | K | 9 | 0.88 | - | 2-3 week delay; African American
Delayed test-retest | K | 44 | 0.62 | - | Winter to Spring; Hispanic
Delayed test-retest | K | 44 | 0.58 | - | Fall to Winter; Hispanic
Delayed test-retest | K | 131 | 0.41 | - | Fall to Spring; Hispanic
Delayed test-retest | K | 14 | 0.64 | - | Winter to Spring; Multiracial
Delayed test-retest | K | 15 | 0.70 | - | Fall to Winter; Multiracial
Delayed test-retest | K | 23 | 0.66 | - | Fall to Spring; Multiracial
Test-retest | K | 10 | 0.90 | - | 2-3 week delay; Multiracial
Delayed test-retest | K | 223 | 0.68 | - | Winter to Spring; White
Delayed test-retest | K | 224 | 0.48 | - | Fall to Winter; White
Delayed test-retest | K | 335 | 0.42 | - | Fall to Spring; White
Test-retest | K | 86 | 0.67 | - | 2-3 week delay; White

Disaggregated Reliability of the Slope:

 

Type of Reliability | Age or Grade | n (range) | Coefficient | SEM | Information (including normative data) / Subjects
Reliability for the slope | K | 13 | 0.47 | - | White
Reliability for the slope | K | 7 | 0.54 | - | African American

Disaggregated Validity of the Performance Level Score:

The following disaggregated aReading validity coefficients were derived from a sample of approximately 17,137 Kindergarten students in the FAST system. Approximately 32.2% were female, and 34.7% were male, with approximately 33% of the sample not reporting their gender. Approximately 42.6% of the sample of students were White, 8.5% were African American, 4.9% were Hispanic, 3.5% were Asian, 4.4% were recorded as “Other”, 1.7% were Multiracial, 1.2% were American Indian or Alaska Native, and 0.1% were Native Hawaiian or Other Pacific Islander. Approximately 33% of the sample did not report ethnicity/race. Approximately 55.5% of students were reported as not eligible for special education services, while 3.5% of students were receiving special education services. Approximately 40.9% of students did not report their special education status.

Type of Validity | Age or Grade | Test or Criterion | n (range) | Coefficient | Information (including normative data) / Subjects
Concurrent | K | aReading | 17 | 0.48 | Fall; Asian
Predictive | K | aReading | 17 | 0.40 | Fall to Spring; Asian
Concurrent | K | aReading | 10 | 0.48 | Spring; Multiracial

 

Alternate Forms: Unconvincing Evidence

1. Evidence that alternate forms are of equal and controlled difficulty:
To determine parallel form construction, a one-way within-subjects (repeated measures) ANOVA was conducted to compare the effect of alternate forms (n = 10) on the number of correct, de-trended responses within individuals across 37 Kindergarten students. There was a non-significant effect of form, F(9, 324) = 0.64, p = 0.76, indicating that the different forms did not produce significantly different de-trended mean estimates of correct responses (see the sketch following this list).
 
2. Number of alternate forms of equal and controlled difficulty: 20
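The sketch below illustrates the repeated-measures ANOVA described in item 1, computed directly from a students-by-forms matrix of de-trended correct-response counts; the data are simulated, and only the procedure mirrors the reported analysis.

```python
# Sketch of a one-way repeated-measures ANOVA on alternate forms, computed
# with NumPy/SciPy from a students x forms matrix of de-trended correct
# responses. Data are simulated; only the procedure mirrors the analysis
# reported above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = rng.normal(20, 4, size=(37, 10))  # 37 students x 10 alternate forms

n_students, n_forms = scores.shape
grand_mean = scores.mean()
ss_forms = n_students * ((scores.mean(axis=0) - grand_mean) ** 2).sum()
ss_subjects = n_forms * ((scores.mean(axis=1) - grand_mean) ** 2).sum()
ss_total = ((scores - grand_mean) ** 2).sum()
ss_error = ss_total - ss_forms - ss_subjects

df_forms = n_forms - 1                       # 9
df_error = (n_forms - 1) * (n_students - 1)  # 324
f_stat = (ss_forms / df_forms) / (ss_error / df_error)
p_value = stats.f.sf(f_stat, df_forms, df_error)
print(f"F({df_forms}, {df_error}) = {f_stat:.2f}, p = {p_value:.2f}")
```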
 

Sensitive to Student Improvement: Unconvincing Evidence

Describe evidence that the monitoring system produces data that are sensitive to student improvement:

Across 30 Kindergarten students, the slope for average weekly improvement (β1Week) was significantly different from 0 (β1Week = 0.84, SE = 0.06).
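The reported estimate likely comes from a multilevel growth model; the sketch below shows a simpler approximation that fits an ordinary least squares slope for each student and tests whether the mean weekly slope differs from zero, using simulated data.

```python
# Simplified check of sensitivity to improvement: estimate each student's
# weekly slope by ordinary least squares and test whether the mean slope
# differs from zero. This approximates, with simulated data, the kind of
# analysis summarized above; it is not the published model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
weeks = np.arange(12)
slopes = [np.polyfit(weeks, 0.84 * weeks + rng.normal(0, 3, weeks.size), 1)[0]
          for _ in range(30)]  # 30 simulated Kindergarten students

t_stat, p_value = stats.ttest_1samp(slopes, 0.0)
print(f"mean weekly gain = {np.mean(slopes):.2f}, t(29) = {t_stat:.2f}, p = {p_value:.3g}")
```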

End-of-Year Benchmarks: Convincing Evidence

1. Are benchmarks for minimum acceptable end-of-year performance specified in your manual or published materials?

Yes.

a. Specify the end-of-year performance standards:

Kindergarten: 30 letter sounds correct per minute.

b. Basis for specifying minimum acceptable end-of-year performance:

Criterion-referenced

c. Specify the benchmarks:

Low risk (high risk):

Kindergarten: Fall = 8 (7); Winter = 15 (14); Spring = 30 (25)

d. Basis for specifying benchmarks:

Criterion-referenced

The primary score for interpretation is the number of letter sounds read correctly per minute. Psychometric evidence supports number correct, or number correct per minute, as the primary method of interpretation. Accuracy is provided as a supplemental score, and students who perform at less than 95% accuracy are flagged for the user to consider. Our training materials caution against interpreting rate-based scores until accuracy is approximately 95%. Goals in the system use number correct and number correct per minute as the primary indices of growth, but also prompt monitoring of the accuracy of student responding. This is designed to help teachers and other users consider multiple aspects of student performance, including number correct, errors, rate, and accuracy.
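The sketch below applies the 95% accuracy caution described above to a small roster; the record fields and threshold handling are illustrative, not the FAST reporting logic.

```python
# Sketch of the accuracy caution described above: rate-based scores are
# interpreted only when accuracy is at least ~95%; lower-accuracy students
# are flagged for follow-up. Record fields are illustrative.

def flag_low_accuracy(records: list[dict], threshold: float = 0.95) -> list[dict]:
    flagged = []
    for rec in records:
        accuracy = rec["correct"] / rec["attempted"] if rec["attempted"] else 0.0
        if accuracy < threshold:
            flagged.append({**rec, "accuracy": round(accuracy, 2)})
    return flagged

roster = [
    {"student": "A", "correct": 30, "attempted": 31},
    {"student": "B", "correct": 18, "attempted": 24},
]
print(flag_low_accuracy(roster))  # student B (75% accuracy) is flagged
```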

Benchmarks were established for earlyReading to help teachers accurately identify students who are at risk or not at risk for academic failure. These benchmarks were developed from a criterion study examining earlyReading assessment scores in relation to scores on the Aprenda 3, using the Preprimario 2 for Kindergarten students and the Primario 1 for students in Grade 1. Measures of diagnostic accuracy were used to determine decision thresholds using criteria related to sensitivity, specificity, and area under the curve (AUC). Specifically, specificity and sensitivity were computed at different cut scores in relation to maximum AUC values. Final benchmark percentiles were selected by maximizing each criterion at each cut score (i.e., when the cut score maximized specificity ≥ 0.70 and sensitivity was also ≥ 0.70; see Hintze & Silberglitt, 2005), with precedence given to maximizing specificity. Based on these analyses, the values at the 40th and 15th percentiles were identified as the primary and secondary benchmarks for earlyReading, respectively; they correspond to predicted performance at the 40th and 15th percentiles on the Aprenda, a nationally normed assessment of Spanish early reading skills. Performance above the primary benchmark indicates low risk for long-term reading difficulties, performance between the primary and secondary benchmarks indicates some risk, and performance below the secondary benchmark indicates high risk. These risk levels help teachers accurately monitor student progress using the FAST earlyReading measures.
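The following sketch illustrates the cut-score procedure described above with simulated data: sensitivity and specificity are computed at candidate Letter Sounds cut scores against at-risk status on a criterion measure, cuts where both values are at least 0.70 are retained, and precedence is given to specificity. The variable names and simulated distributions are assumptions.

```python
# Sketch of the cut-score procedure described above: for each candidate
# Letter Sounds cut score, compute sensitivity and specificity for predicting
# at-risk status on the criterion (e.g., below the 40th percentile on the
# Aprenda), keep cuts where both are >= 0.70, and give precedence to
# specificity. Data are simulated; thresholds follow the text.
import numpy as np

def diagnostic_table(scores, at_risk, cut_scores):
    rows = []
    for cut in cut_scores:
        flagged = scores < cut                      # screen says "at risk"
        sens = (flagged & at_risk).sum() / at_risk.sum()
        spec = (~flagged & ~at_risk).sum() / (~at_risk).sum()
        rows.append((cut, sens, spec))
    return rows

rng = np.random.default_rng(0)
criterion = rng.normal(50, 10, 300)                 # simulated criterion scores
screen = 0.6 * criterion + rng.normal(0, 6, 300)    # simulated Letter Sounds scores
at_risk = criterion < np.percentile(criterion, 40)  # below the 40th percentile

candidates = diagnostic_table(screen, at_risk, cut_scores=range(20, 40))
keep = [(c, s, p) for c, s, p in candidates if s >= 0.70 and p >= 0.70]
if keep:
    best = max(keep, key=lambda row: row[2])        # precedence to specificity
    print(f"cut = {best[0]}, sensitivity = {best[1]:.2f}, specificity = {best[2]:.2f}")
```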

Normative profile:

Representation: Local
Date: 2012-2013
Number of States: 1
Size: 176
Gender: Unknown
Region: Upper Midwest
Disability classification: 6%

Procedure for specifying benchmarks: Diagnostic accuracy was used to determine cut points (benchmarks) at the 15th and 40th percentiles, which correspond to high risk and low risk, respectively.
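Applying the Kindergarten benchmarks listed above is straightforward; the sketch below assumes that scores at or above the low-risk value indicate low risk, scores at or below the high-risk value indicate high risk, and scores in between indicate some risk. The exact comparison rules used in FAST reports are not specified here.

```python
# Sketch of applying the Kindergarten Letter Sounds benchmarks listed above.
# Assumption: scores at or above the low-risk cut -> low risk; at or below
# the high-risk cut -> high risk; anything in between -> some risk.

KINDER_BENCHMARKS = {           # season: (low-risk cut, high-risk cut)
    "fall": (8, 7),
    "winter": (15, 14),
    "spring": (30, 25),
}

def risk_level(correct_per_minute: int, season: str) -> str:
    low_cut, high_cut = KINDER_BENCHMARKS[season]
    if correct_per_minute >= low_cut:
        return "low risk"
    if correct_per_minute <= high_cut:
        return "high risk"
    return "some risk"

print(risk_level(27, "spring"))  # some risk (between 25 and 30)
print(risk_level(16, "winter"))  # low risk
```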

Rates of Improvement Specified: Unconvincing Evidence

1. Is minimum acceptable growth (slope of improvement or average weekly increase in score by grade level) specified in manual or published materials?

Yes.

a. Specify the growth standards:

The table below provides average weekly growth by percentile and season for students in Kindergarten; an illustrative goal-setting sketch using these rates appears at the end of this section.

Metric: Rate

 

Kindergarten

Percentile | Winter | Spring
90th | 7.04 | 7.76
80th | 5.73 | 6.11
70th | 4.47 | 4.78
60th | 3.67 | 4.05
50th | 2.83 | 3.44
40th | 2.31 | 2.97
30th | 1.85 | 2.19
20th | 0.94 | 1.49
10th | 0.00 | 0.58
Average | 3.21 | 3.84
SD | 2.57 | 2.73
N | 220 | 309
Range | -1.85 to 9.06 | -0.98 to 11.67

b. Basis for specifying minimum acceptable growth:

Norm-referenced weekly growth is calculated

Normative profile:

Representation: Local
Date: 2013-2014
Number of States: 1
Size: 1,372 total students were included in this sample.
Gender: 52.9% Male, 47.1% Female
Region: Midwest
Race/Ethnicity: 65.3% White, 7.6% Black, 10.4% Hispanic, 10.8% Other, 2.8% Asian/Pacific Islander, 1.8% Multiracial, 1.3% American Indian/Alaska Native
Disability classification: 77% of this sample did not receive special education services. 8.7% of this sample did receive special education services. The special education status of the remaining 14.3% of the sample is unknown.
Grade Distribution: 38.9% kindergarten; 61.1% first grade.
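As one hedged illustration of how the weekly growth norms in the table above might be used, the sketch below projects an end-of-period score from a baseline and a chosen normative growth rate; this is a generic goal-setting example, not a procedure prescribed in the FAST materials.

```python
# Illustration of one way the weekly growth norms above could be used:
# project a student's expected score after a monitoring period from a chosen
# growth percentile. Generic goal-setting sketch, not a FAST-prescribed rule.

WINTER_WEEKLY_GROWTH = {90: 7.04, 80: 5.73, 70: 4.47, 60: 3.67, 50: 2.83,
                        40: 2.31, 30: 1.85, 20: 0.94, 10: 0.00}

def projected_score(baseline: float, weeks: int, growth_percentile: int) -> float:
    """Baseline score plus weeks of growth at the chosen normative rate."""
    return baseline + weeks * WINTER_WEEKLY_GROWTH[growth_percentile]

# A student at 12 correct per minute, monitored for 8 weeks at the 50th
# percentile winter rate, would be projected to reach about 34.6.
print(projected_score(baseline=12, weeks=8, growth_percentile=50))
```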

Decision Rules for Changing Instruction: Data Unavailable

Decision Rules for Increasing Goals: Data Unavailable

Improved Student Achievement: Unconvincing Evidence

Design: Random assignment not used.

Unit of assignment: 6 Teachers

Unit of analysis: 6 Teachers

Duration of product implementation: 1 year

Describe analysis: Teachers were surveyed on whether they agreed that use of this measure improved student achievement.

Fidelity:

Description of when and how fidelity of treatment information was obtained: Survey results were collected in the Spring after 1 year of implementation. Teachers were required to use the measure based on district requirements.

Results:

Results of the study: 84% of teachers agreed that the tool improved student achievement. 

Improved Teacher Planning: Unconvincing Evidence

Describe evidence that teachers’ use of the tool results in improved planning:

In a teacher-user survey, 82% of teachers indicated that FAST assessment results were helpful in making instructional grouping decisions (n = 401). 82% of teachers also indicated that assessment results helped them adjust interventions for students who were at risk (n = 369). Finally, a majority of teachers indicated that they look at assessment results at least once per month (66%), and nearly a quarter indicated that they look at assessment results weekly or even more often (n = 376).