FAST earlyMath

Match Quantity

The Formative Assessment System for Teachers (FAST) is a cloud-based suite of assessment and reporting tools that includes earlyMath. As of 2013-14, there is a $5 per student per year charge for the system. As a cloud-based assessment suite, there are no hardware costs or fees for additional materials. 

Computer and internet access are required for full use.

Testers will require less than 1 hour of training.

Paraprofessionals can administer the test.

FastBridge Learning
520 Nicollet Mall
Suite 910
Minneapolis, MN 55402-1057
Phone: 612-254-2534

Field-tested training manuals are included and provide all needed implementation information.

Access to interactive online self-guided teacher training is included at no additional cost. In-person training is available at an additional cost of $300 per hour.

earlyMath is used to monitor student progress in early mathematics in the early primary grades (typically K to 1st). Most assessments provide information on both the accuracy and rate or efficiency of performance.

The appropriate progress monitoring assessments are chosen based on screening performance and are used to diagnose and evaluate skill deficits; those results help guide instructional and intervention development. Match Quantity is recommended for progress monitoring throughout Grade 1 and, depending on specific student needs, in Kindergarten.

The Match Quantity test assesses the student's ability to correctly match a quantity of dots to a numeral, given a choice of four numerals. An understanding of the association between real-world quantities and formal symbols is a key developmental marker in early numeracy (Griffin, 2008). As the student points to the numeral that corresponds to the number of dots presented, the examiner marks any errors on the examiner's copy of the score form. The resulting score is the total number of items answered correctly.

Each earlyMath test takes approximately 1-4 minutes to administer; additional time required for scoring is 1 minute or less.

The Match Quantity assessment has 20 alternate forms.

Rate is calculated as the number of correct items per minute. Raw scores of items correct are also provided.
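The rate calculation above can be sketched in a few lines; this is an illustrative helper, not FastBridge's implementation, and the function name and example values are invented.

```python
# Sketch (not FastBridge's implementation): convert a raw count of correct
# Match Quantity responses into a rate of correct items per minute.

def items_per_minute(correct: int, seconds: float) -> float:
    """Rate = correct items per minute of administration time."""
    if seconds <= 0:
        raise ValueError("administration time must be positive")
    return correct * 60.0 / seconds

# For the 1-minute timed Match Quantity form, rate equals the raw score:
print(items_per_minute(14, 60))  # 14 correct in 60 s -> 14.0 per minute
```

For the 1-minute administration the rate and the raw score coincide, which is why both are reported.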

 

Reliability of the Performance Level Score

Grade: K
Rating: Full bubble

Type of Reliability | Grade | n (range) | Coefficient (range) | Coefficient (median) | SEM | Information (including normative data) / Subjects
Test-Retest | K | 36 | - | 0.76 | - | -
Interrater | K | 45 | 0.58-1.00 | 0.93 | - | -
Alternate Form | K | 34-38 | 0.44-0.68 | 0.61 | - | -
Coefficient alpha* | K | 144 | - | 0.74 (first 10 items), 0.76 (first 13 items), 0.80 (first 17 items) | - | -
Split-Half* | K | 144 | - | 0.76 (first 10 items), 0.78 (first 13 items), 0.87 (first 17 items) | - | -

*Internal consistency measures such as coefficient alpha and split-half reliability are inflated on timed measures because of the high percentage of unreached items at the end of the assessment, i.e., items to which examinees did not respond (Crocker & Algina, 1986). To both illustrate and reduce this inflation, internal consistency estimates were computed on the items attempted by approximately 16% of students, the items completed by 50% of students, and the items completed by approximately 84% of students. Items not completed were coded as incorrect.
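The truncation approach in the footnote above can be illustrated with the standard Cronbach's alpha formula applied to only the first k items. The score matrix below is invented for illustration; trailing zeros stand in for items not reached before time expired.

```python
# Minimal sketch: Cronbach's alpha on the first k items of a 0/1 score
# matrix, with unattempted items coded as incorrect (0), mirroring the
# truncation procedure described in the footnote. Data are hypothetical.
from statistics import pvariance

def coefficient_alpha(scores, k):
    """Cronbach's alpha on the first k item columns of a 0/1 score matrix."""
    items = [[row[i] for row in scores] for i in range(k)]
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row[:k]) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical responses for 6 examinees on a 5-item timed test;
# trailing 0s are items not reached before time expired.
scores = [
    [1, 1, 1, 1, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],
]
print(round(coefficient_alpha(scores, 3), 2))  # alpha on the first 3 items
```

Running the estimate at several cutoffs (here k = 3 of 5; in the table, the first 10, 13, and 17 of 20 items) shows how the estimate shifts as more unreached items are included.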

Reliability of the Slope

Grade: K
Rating: dash

Validity of the Performance Level Score

Grade: K
Rating: Full bubble

Type of Validity | Grade | Test or Criterion | n (range) | Coefficient (range) | Coefficient (median) | Information (including normative data) / Subjects
Concurrent | K | Measures of Academic Progress for Primary Grades – Math (MAP) | 220 | - | 0.46 | Data collected in Winter
Predictive | K | MAP | 215 | - | 0.57 | Fall to Winter prediction
Predictive | K | GMADE composite Level R | 142 | - | 0.44 | Fall to Spring prediction
Predictive | K | GMADE composite Level R | 144 | - | 0.34 | Winter to Spring prediction
Concurrent | K | GMADE composite Level R | 150 | - | 0.47 | Data collected in Spring

 

Predictive Validity of the Slope of Improvement

Grade: K
Rating: dash

Bias Analysis Conducted

Grade: K
Rating: No

Disaggregated Reliability and Validity Data

Grade: K
Rating: No

Alternate Forms

Grade: K
Rating: Empty bubble

1. Evidence that alternate forms are of equal and controlled difficulty or, if IRT based, evidence of item or ability invariance:

There are 20 items, and earlyMath MQ is a 1-minute timed assessment. Each item presents an array of blue dots (3/4 in. in diameter) on the left side of the student stimulus page and a 2 x 2 matrix of numerals on the right side. Dot arrays of 1 to 10 dots are used. Arrays of 1 to 6 dots appear in both scattered and ordered configurations, while arrays of 7 to 10 dots appear only in an ordered configuration.

The matrix of four numerals always contains the target number and three plausible distractors. Plausible distractors meet one of the following criteria: one or two numerals greater than the target number; one or two numerals less than the target number; or a numeral suggested by the ordered configuration (e.g., in a 3 x 3 ordered configuration of dots where the target number is 9, a plausible distractor would be 3 or 6 because of the way the dots are presented). The target number was placed so that it did not appear in the same section of the matrix on every item.

To evaluate parallel form construction, a one-way within-subjects (repeated measures) ANOVA was conducted across 41 students to compare the effect of alternate form (n = 5 forms) on mean correct responses within individuals. The effect of form was non-significant, F(4, 114) = 1.66, p = 0.16, indicating that the different forms did not produce significantly different mean correct responses.
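The parallel-forms check described above can be sketched as a hand-rolled one-way repeated-measures ANOVA computed from sums of squares. The scores below are invented for illustration (the actual analysis used 5 forms and 41 students).

```python
# Sketch of a one-way repeated-measures ANOVA testing whether mean scores
# differ across alternate forms; data[subject][form] = score. This is an
# illustrative reconstruction, not the publisher's analysis code.
from statistics import mean

def rm_anova_f(data):
    """Return (F statistic for the form effect, df_forms, df_error)."""
    n, k = len(data), len(data[0])
    grand = mean(score for row in data for score in row)
    form_means = [mean(row[j] for row in data) for j in range(k)]
    subj_means = [mean(row) for row in data]
    ss_forms = n * sum((m - grand) ** 2 for m in form_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((s - grand) ** 2 for row in data for s in row)
    ss_error = ss_total - ss_forms - ss_subj  # form x subject interaction
    df_forms, df_error = k - 1, (n - 1) * (k - 1)
    return (ss_forms / df_forms) / (ss_error / df_error), df_forms, df_error

# 4 hypothetical students x 3 hypothetical alternate forms
data = [[12, 13, 12], [9, 10, 9], [15, 14, 16], [11, 11, 10]]
f, df1, df2 = rm_anova_f(data)
print(f"F({df1}, {df2}) = {f:.2f}")
```

A small F (relative to the critical value for its degrees of freedom) supports the claim that forms do not differ in mean difficulty.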

2. Number of alternate forms of equal and controlled difficulty:

20

Rates of Improvement Specified

Grade: K
Rating: Empty bubble

Is minimum acceptable growth (slope of improvement or average weekly increase in score by grade level) specified in manual or published materials?

Pending Fall 2014

a. Specify the growth standards:

Percentile | Weekly Growth
25th | 0.03
50th | 0.12
75th | 0.21

b. Basis for specifying minimum acceptable growth:

Norm-referenced.

Normative profile:

Representation: Local
Date: 2013-2014
Number of States: 1
Size: 497
Gender: 51% male, 49% female
SES: 41% eligible for free or reduced lunch
Race/Ethnicity: 84% White, 5% Black, 5% American Indian/Alaska Native, 3% Hispanic
Disability classification: 10% eligible for special education services

End-of-Year Benchmarks

Grade: K
Rating: Empty bubble

1. Are benchmarks for minimum acceptable end-of-year performance specified in your manual or published materials?

Pending Fall 2014

Sensitive to Student Improvement

Grade: K
Rating: Empty bubble

Describe evidence that the monitoring system produces data that are sensitive to student improvement (i.e., when student learning actually occurs, student performance on the monitoring tool increases on average):

Across 497 Kindergarten students, the slope for average weekly improvement (β1Week) was significantly different from 0 (β1Week = 0.12, SE = 0.01).
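A per-student weekly slope of the kind reported above can be sketched as an ordinary least-squares regression of score on week. The weeks and scores below are invented for illustration; they are not normative data.

```python
# Sketch: OLS slope of Match Quantity rate on weeks since fall screening,
# i.e., the average weekly improvement for one hypothetical student.
from statistics import mean

def weekly_slope(weeks, scores):
    """OLS slope: change in items-correct-per-minute per week."""
    wbar, sbar = mean(weeks), mean(scores)
    num = sum((w - wbar) * (s - sbar) for w, s in zip(weeks, scores))
    den = sum((w - wbar) ** 2 for w in weeks)
    return num / den

weeks = [0, 4, 9, 13, 18, 27, 31, 36]   # weeks since fall screening
scores = [5, 6, 6, 7, 8, 9, 9, 10]      # Match Quantity rate per minute
print(round(weekly_slope(weeks, scores), 2))
```

Comparing a student's slope to the normed weekly-growth percentiles (e.g., 0.12 at the 50th percentile) is how sensitivity to improvement is judged in practice.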

Decision Rules for Changing Instruction

Grade: K
Rating: dash

Decision Rules for Increasing Goals

Grade: K
Rating: dash

Improved Student Achievement

Grade: K
Rating: dash

Improved Teacher Planning

Grade: K
Rating: Empty bubble