FAST earlyMath: Number Sequence

The Formative Assessment System for Teachers (FAST) is a cloud-based suite of assessment and reporting tools that includes earlyMath. As of 2013-14, there is a $5 per student per year charge for the system. As a cloud-based assessment suite, there are no hardware costs or fees for additional materials. 
Computer and internet access is required for full use. Testers will require less than 1 hour of training. Paraprofessionals can administer the test. 
FastBridge Learning
520 Nicollet Mall
Suite 910
Minneapolis, MN 55402-1057
Website: http://www.fastbridge.org/
Field-tested training manuals are included and should provide all implementation information. Access to interactive online self-guided teacher training is included at no additional cost. In-person training is available at an additional cost of $300 per hour. 
earlyMath is used to monitor student progress in early mathematics in the early primary grades (typically kindergarten and Grade 1). Most assessments provide information on both the accuracy and the rate or efficiency of performance. The appropriate progress monitoring assessment(s) are chosen based on screening performance and are used to diagnose and evaluate skill deficits; those results help guide instructional and intervention development. Number Sequence is recommended for progress monitoring throughout kindergarten, depending on specific student needs.
The Number Sequence test assesses oral counting and comprehension of the mental number line. The test is completely verbal, and no student stimulus materials are used. As the student responds to each item, the examiner marks any errors on the score form. There are 13 items, separated by the type of question:
- Count Sequence: measures the student's ability to count forward from 1 to 31, and also to count backward.
- Number After: items of varying difficulty that assess understanding of "number after," "one more than," and "two more than."
- Number Before: items of varying difficulty that assess understanding of "number before," "one less than," and "two less than."
- Number Between: measures the student's understanding of the concept "between."
The resulting score is the number of items responded to correctly out of 13. 
Each earlyMath test takes approximately 1-4 minutes to administer; additional time required for scoring is 1 minute or less. The Number Sequence assessment has 20 alternate forms. The raw score is the total number of items responded to correctly. 
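As a rough sketch of the scoring rule just described, the raw score is simply the count of the 13 scored items marked correct. The helper below is hypothetical (the function name and the correct/incorrect representation are not from the published materials) and assumes each item is recorded as a simple flag:

```python
# Hypothetical scorer for a Number Sequence form, assuming the examiner
# records each of the 13 scored items as True (correct) or False (error).

def raw_score(item_results):
    """Return the number of the 13 scored items marked correct."""
    if len(item_results) != 13:
        raise ValueError("a Number Sequence form has 13 scored items")
    return sum(1 for correct in item_results if correct)

# Example: a student answers the first 11 items correctly and misses two.
responses = [True] * 11 + [False] * 2
print(raw_score(responses))  # 11
```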
Reliability of the Performance Level Score
Grade: K
Rating:

Type of Reliability | Age or Grade | n (range) | Coefficient range | Coefficient median | SEM | Information (including normative data) / Subjects
Test-Retest | K | 35 | – | 0.80 | – | 10% Black, 8% Hispanic, 82% White; 15% Free and reduced lunch.
Interrater | K | 45 | 0.85–1.00 | 1.00 | – | A random sample of cases was selected from the 2013-2014 school year.
Alternate Form | K | 39–41 | 0.67–0.82 | 0.75 | – | 5% Asian, 23% Black, 11% Hispanic, 3% Multiracial, 58% White; 6% IEP eligible.
Coefficient alpha* | K | 598 | – | 0.76 | – | A random sample of cases was selected from the 2013-2014 school year.
Split-Half* | K | 598 | – | 0.87 | – | The same sample used to calculate coefficient alpha, from the 2013-2014 school year.
*Internal consistency measures, such as coefficient alpha or split-half reliability, are inflated on timed measures because of the high percentage of incomplete items at the end of the assessment, i.e., items to which examinees did not respond (Crocker & Algina, 1986). To both illustrate the potential inflation and reduce it, estimates of internal consistency (reliability) were run on the items attempted by approximately 16% of students, the items completed by 50% of students, and the items completed by approximately 84% of students. Items not completed were coded as incorrect.
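To make the footnote's procedure concrete, the sketch below computes coefficient alpha on fabricated 0/1 item data, coding unattempted items (marked None) as incorrect before estimating. The function name and data are illustrative only, not the published analysis:

```python
import statistics

def coefficient_alpha(score_matrix):
    """Cronbach's alpha for a students-by-items matrix of 0/1 item scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(score_matrix[0])
    item_vars = [statistics.pvariance([row[i] for row in score_matrix])
                 for i in range(k)]
    total_var = statistics.pvariance([sum(row) for row in score_matrix])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Fabricated timed-test data: None marks items a student never reached.
raw = [
    [1, 1, 1, 1, 1],
    [1, 1, 1, 0, None],
    [1, 1, 0, None, None],
    [0, None, None, None, None],
    [1, 1, 1, 1, 0],
    [1, 0, None, None, None],
]
# Per the procedure above, unattempted items are coded as incorrect (0).
coded = [[0 if x is None else x for x in row] for row in raw]
print(round(coefficient_alpha(coded), 2))  # 0.83
```

Coding trailing unattempted items as 0 makes late items agree across students who ran out of time, which is exactly the source of inflation the footnote describes.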
Reliability of the Slope
Grade: K
Rating:
Validity of the Performance Level Score
Grade: K
Rating:

Type of Validity | Age or Grade | Test or Criterion | n (range) | Coefficient range | Coefficient median | Information (including normative data) / Subjects
Concurrent | K | Measures of Academic Progress for Primary Grades – Math (MAP) | 220 | – | 0.54 | Data collected in Winter. 1% American Indian, 2% Asian, 4% Black, 2% Hispanic, 91% White; 30% Free and reduced lunch; 12% IEP eligible.
Predictive | K | MAP | 215 | – | 0.70 | Fall to Winter prediction. See above.
Predictive | K | GMADE composite Level R | 142 | – | 0.49 | Fall to Spring prediction. 3% American Indian, 4% Asian, 8% Black, 6% Hispanic, 80% White; 29% Free and reduced lunch; 8% IEP eligible.
Predictive | K | GMADE composite Level R | 144 | – | 0.48 | Winter to Spring prediction. See above.
Concurrent | K | GMADE composite Level R | 150 | – | 0.54 | Data collected in Spring. See above.
Predictive Validity of the Slope of Improvement
Grade: K
Rating:

Bias Analysis Conducted
Grade: K
Rating: No

Disaggregated Reliability and Validity Data
Grade: K
Rating: No

Alternate Forms
Grade: K
Rating:
1. Evidence that alternate forms are of equal and controlled difficulty or, if IRT-based, evidence of item or ability invariance:
Forms were constructed with 13 items separated by the type of question asked. Types of items include: Count Sequence, Number After, Number Before, and Number Between.
Count Sequence. The Count Sequence test has two counting-forward items and two counting-backward items. In each item, the examiner starts the count sequence by saying a consecutive sequence of three numbers (e.g., 2, 3, 4). Item 1 requires the student to count forward to 10 starting from 1, and Item 2 requires the student to count forward to 31 starting from a number other than 1; points are awarded when the student reaches 15, 20, and 31 without error. Item 3 requires the student to count back three numbers from a single-digit number, and Item 4 requires the student to count back five numbers from a number between 8 and 31.
Number After. The Number After test has three items, each one more difficult than the previous item. The three categories of items are as follows:
- The first prompt is "What number comes after x?"
- The second prompt is "What is one more than x?"
- The third prompt is "What is two more than x?"
Number Before. The Number Before test also has three items, each one more difficult than the previous item. The three categories of items are as follows:
- The first prompt is "What number comes before x?"
- The second prompt is "What is one less than x?"
- The third prompt is "What is two less than x?"
Number Between. The Number Between test has one item that uses the prompt “What number is between x and y?”
Numbers included in the assessment were chosen strategically with the consultation of content experts to measure important skills. These guidelines were used when creating each progress monitoring form.
To evaluate parallel form construction, a one-way, within-subjects (repeated measures) ANOVA was conducted to compare the effect of alternate forms (n = 5) across 41 students on the mean number of correct responses within individuals. There was a non-significant effect of form, F(4, 99) = 0.32, p = 0.86, indicating that the different forms did not produce significantly different mean numbers of correct responses.
2. Number of alternate forms of equal and controlled difficulty:
20
Rates of Improvement Specified
Grade: K
Rating:
Is minimum acceptable growth (slope of improvement or average weekly increase in score by grade level) specified in manual or published materials?
Pending Fall 2014
a. Specify the growth standards:
Percentile | Weekly Growth
25th | 0.07
50th | 0.13
75th | 0.18
b. Basis for specifying minimum acceptable growth:
Norm-referenced
Normative profile:
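As a hypothetical illustration of how these norm-referenced weekly growth rates might be applied, the sketch below projects an end-of-period raw score from a baseline. The helper name, the 18-week horizon, and the cap at the 13-point form maximum are assumptions, not published guidance:

```python
# Projecting a raw score from the published weekly growth norms
# (points per week by percentile); helper and example values are
# hypothetical.

WEEKLY_GROWTH = {25: 0.07, 50: 0.13, 75: 0.18}  # percentile -> points/week

def projected_score(baseline, weeks, percentile=50, max_score=13):
    """Baseline raw score plus expected growth, capped at the form maximum."""
    return min(baseline + WEEKLY_GROWTH[percentile] * weeks, max_score)

# A student starting at 5/13 and growing at the median rate for 18 weeks:
print(round(projected_score(5, 18), 2))  # 7.34
```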
End-of-Year Benchmarks
Grade: K
Rating:
1. Are benchmarks for minimum acceptable endofyear performance specified in your manual or published materials?
Pending Fall 2014
Sensitive to Student Improvement
Grade: K
Rating:
Describe evidence that the monitoring system produces data that are sensitive to student improvement (i.e., when student learning actually occurs, student performance on the monitoring tool increases on average):
Across 497 kindergarten students, the slope for average weekly improvement (β1Week) was significantly different from 0 (β1Week = 0.13; SE = 0.00).
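The reported slope comes from a growth model fit across all 497 students. As a simplified, hypothetical illustration, a single student's weekly improvement slope can be estimated by ordinary least squares on that student's monitoring scores:

```python
# OLS slope of raw score on week number: points gained per week for one
# student's (fabricated) progress-monitoring data.

def weekly_slope(weeks, scores):
    """Least-squares slope of score on week."""
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

# One Number Sequence score every two weeks (illustrative values).
weeks = [0, 2, 4, 6, 8, 10]
scores = [4, 4, 5, 5, 6, 5]
print(round(weekly_slope(weeks, scores), 2))  # 0.16
```

Under this sketch, a slope near the published median rate of 0.13 points per week would indicate typical growth.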
Decision Rules for Changing Instruction
Grade: K
Rating:

Decision Rules for Increasing Goals
Grade: K
Rating:

Improved Student Achievement
Grade: K
Rating:

Improved Teacher Planning
Grade: K
Rating: