aimswebPlus Reading

Letter Word Sounds Fluency

Cost

aimswebPlus™ is a subscription-based tool. There are three subscription types available for customers:

aimswebPlus Complete is $8.50 per student and includes all measures.

aimswebPlus Reading is $6.50 per student and includes early literacy and reading measures.

aimswebPlus Math is $6.50 per student and includes early numeracy and math measures.

Technology, Human Resources, and Accommodations for Special Needs

Test accommodations that are documented in a student’s Individual Education Plan (IEP) are permitted with aimswebPlus. However, not all measures allow for accommodations. Letter Word Sounds Fluency employs a strict time limit to generate rate-based scores. As such, valid interpretation of the national norms, which are an essential aspect of decision-making during benchmark testing, depends on strict adherence to the standard administration procedures.

These accommodations are allowed for Letter Word Sounds Fluency: enlarging test forms and modifying the environment (e.g., special lighting, adaptive furniture).

Service and Support

NCS Pearson, Inc.
Phone: (866) 313-6194

www.aimsweb.com

www.aimswebplus.com

Training manuals are included and should provide all implementation information.

Pearson provides phone- and email-based ongoing technical support, as well as a user group forum that facilitates the asking and answering of questions.

Purpose and Other Implementation Information

aimswebPlus is a brief and valid assessment system for monitoring reading and math skills. Normative data were collected in 2013–2014 on a combination of fluency measures that are sensitive to growth and new standards-based assessments of classroom skills. The resulting scores and reports include Letter Word Sounds Fluency, which informs instruction and helps improve student performance in Kindergarten.

The student is presented with 45 letters and 10 “real” three-letter words, each word presented as an initial sound, a consonant-vowel syllable, and then the consonant-vowel-consonant word these word parts form. Ten unique progress monitoring (PM) forms are available; PM testing is conducted at teacher-determined intervals.

Usage and Reporting

While the Kindergarten and Grade 1 measures are administered individually, most of the Grades 2 through 8 measures can be taken online by entire classes. Once testing is complete, summary or detailed reports for students, classrooms, and districts can be generated immediately, and the math and reading composite scores can be used to estimate the risk that students or classes will not meet end-of-year goals. aimswebPlus reports also offer score interpretation information based on foundational skills for college and career readiness, learning standards, and other guidelines; Lexile® and Quantile® information; and recommendations for appropriate teaching resources.

Raw scores and percentile scores (based on grade norms) are provided. Local norms are also available.

The raw score is the number of letters or word sounds produced correctly in 60 seconds. Letters or words not reached within 60 seconds are not counted as “incorrect”; they are simply excluded from the tally of correct responses.
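To illustrate the scoring rule, here is a minimal sketch in Python (the data format and function name are hypothetical; this is not Pearson’s implementation):

```python
# Minimal scoring sketch (hypothetical data format, not Pearson's implementation).
# Each attempted item is marked True (correct) or False (incorrect);
# items the student never reached within 60 seconds are simply absent.
def lwsf_raw_score(attempted_items: list[bool]) -> int:
    """Number of letters/word sounds produced correctly in 60 seconds."""
    return sum(attempted_items)  # unreached items are excluded, not counted as errors

# Example: 40 items attempted, 36 correct; remaining items never reached.
responses = [True] * 36 + [False] * 4
print(lwsf_raw_score(responses))  # 36
```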

 

Reliability of the Performance Level Score

Grade: K
Rating: Full bubble

Reliability Coefficients for Letter Word Sounds Fluency, Kindergarten

Type of Reliability   Grade   n (range)   Coefficient Range   Coefficient Median   SEM
Alternate form        1       90–217      0.84–0.90           0.87                 11.57

 

Reliability of the Slope

Grade: K
Rating: dash

Validity of the Performance Level Score

Grade: K
Rating: Full bubble

aimswebPlus LWSF Score Predictive Validity Coefficients, by Grade and Criterion Measure

                          Concurrent       Gender    Race              ELL   % Free/Reduced Lunch
Criterion  Grade  N       Unadj  Adj¹      F    M    B   H   O   W     Yes   68–100  34–67  0–33
WRF        K (W)  975     0.62   0.62      50   50   14  25  10  51    9     32      33     36

Note. Demographic columns report the percentage of the sample in each group.
¹ Correlation adjusted for range restriction.

aimswebPlus LWSF Score Concurrent Validity Coefficients, by Grade and Criterion Measure

                          Predictive       Gender    Race              ELL   % Free/Reduced Lunch
Criterion  Grade  N       Unadj  Adj¹      F    M    B   H   O   W     Yes   68–100  34–67  0–33
WRF        K (W)  975     0.55   0.55      50   50   14  25  10  51    9     32      33     36

Note. Demographic columns report the percentage of the sample in each group.
¹ Correlation adjusted for range restriction.

aimswebPlus Word Reading Fluency

Overview: The student reads words aloud for 1 minute.

Test Format: individually administered, timed

Test Content: The student is presented with two pages of word lists totaling 99 words.

20 unique progress monitoring forms are available; PM testing is conducted at teacher-determined intervals.

Score: number of words read correctly in 1 minute

Time limit: 1 minute

Predictive Validity of the Slope of Improvement

Grade: K
Rating: Empty bubble

The predictive validity of the Letter Word Sounds Fluency (LWSF) slope was assessed using the correlation of the annual LWSF rate of improvement (LWSF_ROI) with spring aimswebPlus Word Reading Fluency (WRF_Spring) scores, after controlling for fall LWSF (LWSF_Fall) performance. The model used is shown here:

$$\mathrm{WRF}_{\mathrm{Spring}} = \mathrm{Intercept} + \beta_1 \times \mathrm{LWSF}_{\mathrm{Fall}} + \beta_2 \times \mathrm{LWSF}_{\mathrm{ROI}} + \varepsilon$$

A positive and statistically significant $\beta_2$ indicates that, for a given fall LWSF score, students with higher LWSF ROIs had higher spring criterion scores.
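A minimal sketch of this kind of analysis, assuming simple synthetic data (the variable names and generated values are illustrative, not the actual aimswebPlus data):

```python
# Sketch of the slope-validity regression (synthetic data; not Pearson's analysis).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
lwsf_fall = rng.normal(32, 13, n)      # fall LWSF scores (synthetic)
lwsf_roi = rng.normal(0.3, 0.1, n)     # annual rate of improvement (synthetic)
wrf_spring = 5 + 0.8 * lwsf_fall + 20 * lwsf_roi + rng.normal(0, 8, n)

# Regress spring WRF on fall LWSF and LWSF ROI, matching the model above.
X = sm.add_constant(np.column_stack([lwsf_fall, lwsf_roi]))
fit = sm.OLS(wrf_spring, X).fit()
print(fit.params)      # [intercept, beta1, beta2]
print(fit.pvalues[2])  # significance of beta2: does ROI add beyond fall score?
```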

See GOM 3 for a description of the criterion measure, Word Reading Fluency.

Predictive validity of the fall-to-spring rate of improvement, Letter Word Sounds Fluency

Measure   n     b₂    SE     t     p
LWSF      925   6.0   0.71   8.4   <0.01

 

Bias Analysis Conducted

Grade: K
Rating: No

Disaggregated Reliability and Validity Data

Grade: K
Rating: No

Alternate Forms

Grade: K
Rating: Full bubble

What is the number of alternate forms of equal and controlled difficulty? 10

Note. There are 10 LWSF progress monitoring forms because this measure is administered only during the winter and spring screening periods. As such, there is only one progress monitoring season (winter to spring), which requires fewer progress monitoring forms than measures given during all three screening periods.

To maximize the equivalency of the alternate test forms used for progress monitoring, each form was developed from the same set of test specifications (i.e., test blueprint). Each form contained 45 individual letters and 15 CVC words.

Fourteen alternate forms were developed and administered to kindergarten students from across the U.S. For this study, each student completed four LWSF forms. The 14 forms were divided among 20 sets. Each set included the winter benchmark form as an anchor, plus a block of three additional forms drawn from the 14 alternate PM forms. Each block of three was assigned to two of the 20 sets, with the order of the first and third forms reversed across the two sets in which it appeared. In each set, the anchor form was always administered first. For example, Set 1A = Winter, PM1, PM2, PM3, while Set 1B = Winter, PM3, PM2, PM1. This approach was used to control for order effects and sampling variation.

Sets were randomly assigned to students by spiraling sets within grade at each testing site.

Form equivalency was further evaluated by comparing the mean difficulty of each form. Two methods are used here to describe the comparability of form difficulty: effect size and the percentage of total score variance attributable to form.

The effect size (ES) for each form is the mean of the form minus the weighted average across all forms divided by the pooled SD:

$$ES = \frac{x_i - \bar{X}}{SD_{\mathrm{pooled}}}$$

Effect sizes less than 0.30 are considered small. Most effect sizes were less than 0.10 for the LWSF forms.

The percentage of total score variance attributable to test form was computed by dividing the between-form variance by the sum of the pooled within-form variance and the between-form variance. The percentage of test score variance attributable to forms was less than 1%.
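To make the two indices concrete, here is a small sketch using the summary statistics from the table below. The exact averaging and weighting conventions are assumptions based on the description above, not Pearson’s code; using simple averages of the form means and SDs happens to reproduce the tabled ES values.

```python
# Sketch: form-difficulty comparability from per-form summary statistics.
# Data are copied from the LWSF table below; weighting conventions are assumed.
import numpy as np

n = np.array([124, 105, 90, 217, 90, 104, 90, 105, 124, 105])
means = np.array([43.8, 44.4, 42.5, 44.2, 40.9, 43.5, 41.4, 43.8, 44.8, 43.4])
sds = np.array([14.80, 14.30, 14.10, 15.70, 15.00, 14.60, 14.80, 14.00, 15.10, 14.80])

# Effect size per form: |form mean - average of form means| / average SD.
es = np.abs(means - means.mean()) / sds.mean()
print(np.round(es, 2))   # [0.04 0.08 0.05 0.06 0.16 0.02 0.13 0.04 0.10 0.01]

# Percentage of score variance attributable to form:
# between-form variance / (pooled within-form variance + between-form variance).
between = np.average((means - np.average(means, weights=n)) ** 2, weights=n)
within = np.average(sds ** 2, weights=n - 1)
print(round(100 * between / (between + within), 2))   # roughly 0.6%, i.e., < 1%
```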

Measure   Form   n     Mean   SD      ES
LWSF      4      124   43.8   14.80   0.04
LWSF      5      105   44.4   14.30   0.08
LWSF      6      90    42.5   14.10   0.05
LWSF      7      217   44.2   15.70   0.06
LWSF      8      90    40.9   15.00   0.16
LWSF      9      104   43.5   14.60   0.02
LWSF      10     90    41.4   14.80   0.13
LWSF      11     105   43.8   14.00   0.04
LWSF      12     124   44.8   15.10   0.10
LWSF      13     105   43.4   14.80   0.01
Mean                   43.3   14.7    0.07
SD                     1.28   0.51

Percentage of total score variance attributable to form: 0.61%

 

 

Rates of Improvement Specified

Grade: K
Rating: Empty bubble

Is minimum acceptable growth (slope of improvement or average weekly increase in score by grade level) specified in your manual or published materials?

Yes

Specify the growth standards:

aimswebPlus provides student growth percentiles (SGP) by grade and initial (fall and winter) performance level for establishing growth standards. An SGP indicates the percentage of students in the national sample whose seasonal (or annual) rate of improvement (ROI) fell at or below a specified ROI. Separate SGP distributions are computed for each of five levels of initial (fall or winter) performance to control for differences in growth rate by initial performance level.

When setting a performance goal for a student, the system automatically generates feedback as to the appropriateness of the goal. An SGP < 50 is considered Insufficient; an SGP between 50 and 85 is considered Closes the Gap; an SGP between 85 and 97 is considered Ambitious; and an SGP > 97 is considered Overly Ambitious. aimswebPlus recommends setting performance goals that represent rates of growth between the 85th and 97th SGP. However, the user ultimately determines what growth rate is appropriate on an individual basis.
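A minimal sketch of this feedback logic follows (the function name and the handling of scores falling exactly on a boundary are assumptions; the category cut points come from the text above):

```python
# Sketch of the SGP goal-feedback categories described above.
# Boundary handling at exactly 50, 85, and 97 is an assumption.
def goal_feedback(sgp: float) -> str:
    """Map a student growth percentile (SGP) to aimswebPlus-style feedback."""
    if sgp < 50:
        return "Insufficient"
    elif sgp <= 85:
        return "Closes the Gap"
    elif sgp <= 97:
        return "Ambitious"
    else:
        return "Overly Ambitious"

for sgp in (30, 60, 90, 99):
    print(sgp, goal_feedback(sgp))
```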

What is the basis for specifying minimum acceptable growth?

Norm-referenced

If norm-referenced, describe the normative profile.

Demographic Characteristics of the aimswebPlus Reading Norm Sample, Kindergarten

                    Sex          Race                    SES (F/R lunch)
Subject   Grade     F     M      B     H     O     W     Low   Mod   High
Reading   K         0.50  0.50   0.13  0.25  0.10  0.51  0.32  0.32  0.36

Representation: National

Date: 2013–2014

Number of States: 10

Regions: 4

Gender: 50% male, 50% female

SES: Low, middle, high, free and reduced lunch

ELL: 10%

Please describe other procedures for specifying adequate growth:

To get the most value from progress monitoring, aimswebPlus recommends the following: (1) establish a time frame, (2) determine the level of performance expected, and (3) determine the criterion for success. Typical time frames include the duration of the intervention or the end of the school year. An annual time frame is typically used when IEP goals are written for students who are receiving special education. For example, aimswebPlus goals can be written as follows: In 34 weeks, the student will compare numbers and answer computational problems to earn a score of 30 points on Grade 4 Number Sense Fluency forms.

The criterion for success may be set according to standards, local norms, national norms, or a normative rate of improvement (ROI). For example, the team may want to compare a student’s performance to district/local norms, which compares the student’s score to his or her peers in the context of daily learning.

For normative ROIs, aimswebPlus uses student growth percentiles to describe these normative rates of improvement. Within the aimswebPlus software, the user enters the goal date and moves a digital slider to the desired ROI. As the slider moves, it provides feedback about the strength of the ROI: Insufficient, Closes the Gap, Ambitious, or Overly Ambitious. Users are encouraged to use the Ambitious range (85th–97th SGP) for students in need of intensive intervention.

End-of-Year Benchmarks

Grade: K
Rating: Empty bubble

Are benchmarks for minimum acceptable end-of-year performance specified in your manual or published materials?

Yes

Specify the end-of-year performance standards:

aimswebPlus allows users to select from a range of end-of-year targets the one that is most appropriate for their instructional needs. The targets are based on spring reading or math composite national percentiles by grade level. Twelve national percentile targets ranging from the 15th through the 70th percentile are provided, in increments of 5.

For Grades 3 through 8, it is recommended that users select the spring percentile that most closely aligns to the overall percentage of students below proficient on state reading/math tests. This is the percentage considered at risk. For example, if the percentage of students below proficient on the state test is 20%, the recommended end-of-year benchmark is the 20th percentile. Likewise, if the percentage of students below proficient on the state test is 60%, the recommended end-of-year benchmark is the 60th percentile.

Because passing rates on state assessments are fairly consistent across grades, the percentage of students at risk in Kindergarten through Grade 2 is likely to be very similar to the percentage at risk in Grade 3. As such, aimswebPlus recommends using the percentage of students below proficient on the Grade 3 state reading/math tests as the end-of-year benchmark for students in Kindergarten through Grade 2. For example, if the percentage of students below proficient on the state test in Grade 3 is 30%, the recommended end-of-year benchmark for students in Kindergarten through Grade 2 is the 30th percentile.

If these percentages are not available, aimswebPlus recommends using the 25th percentile as the end-of-year benchmark.

Fall and winter benchmark cut scores are derived automatically by the aimswebPlus system. The cut scores are based on empirical research on the relationship between fall/winter scores and spring benchmarks. Two cut scores are provided: one corresponding to a 50% probability of exceeding the spring benchmark, and the other corresponding to an 80% probability of exceeding the spring benchmark. Fall or winter scores above the 80% probability cut score are deemed low risk; scores between the 50% and 80% cut scores are deemed moderate risk; and scores below the 50% probability cut score are deemed high risk. These three levels correspond to the RTI tiers reported in the aimswebPlus system.
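To make the target selection and the resulting risk tiers concrete, here is a small sketch. The 15th–70th percentile targets and the 50%/80% cut-score logic come from the text above; the function names, the nearest-target selection rule, and the example cut scores are illustrative assumptions.

```python
# Sketch: choosing an end-of-year target and classifying fall/winter risk.
# Nearest-target selection and tie handling are assumptions, not Pearson's rule.
TARGETS = range(15, 71, 5)  # twelve national percentile targets: 15, 20, ..., 70

def end_of_year_target(pct_below_proficient: float) -> int:
    """Pick the spring percentile target closest to the state's below-proficient rate."""
    return min(TARGETS, key=lambda t: abs(t - pct_below_proficient))

def risk_tier(score: float, cut_50: float, cut_80: float) -> str:
    """Classify a fall/winter score using the 50%/80% probability cut scores."""
    if score > cut_80:
        return "low risk"
    elif score >= cut_50:
        return "moderate risk"
    else:
        return "high risk"

print(end_of_year_target(22))                # -> 20
print(risk_tier(38, cut_50=30, cut_80=45))   # -> 'moderate risk' (cut scores made up)
```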

What is the basis for specifying minimum acceptable end-of-year performance?

Norm-referenced

Specify the benchmarks:

Percentage of students below the proficient level on the state test.

What is the basis for specifying these benchmarks?

Norm-referenced

If norm-referenced, describe the normative profile:

Demographic Characteristics of the aimswebPlus Reading Norm Sample, Kindergarten

                    Sex          Race                    SES (F/R lunch)
Subject   Grade     F     M      B     H     O     W     Low   Mod   High
Reading   K         0.50  0.50   0.13  0.25  0.10  0.51  0.32  0.32  0.36

Representation: National

Date: 2013–2014

Number of States: 10

Regions: 4

Gender: 50% male, 50% female

SES: Low, middle, high, free and reduced lunch

ELL: 10%

Sensitive to Student Improvement

Grade: K
Rating: Full bubble

Describe evidence that the monitoring system produces data that are sensitive to student improvement (i.e., when student learning actually occurs, student performance on the monitoring tool increases on average).

Sensitivity to improvement was assessed by demonstrating that annual performance gains were statistically significant and moderate in size when expressed in fall standard deviation units. A gain expressed in SD units that exceeds 0.3 can be considered moderate (see Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Lawrence Erlbaum Associates). As the table below shows, the LWSF fall-to-spring gain of 9.9 points corresponds to 0.74 fall SD units (9.9 / 13.46 = 0.74).

LWSF fall and spring benchmark means, SDs, paired-sample t, and annual gain expressed in fall standard deviation units

           Mean           SD
Measure    Fall   Spring  Fall    Spring   N      Paired t   p      Gain/SD
LWSF       32.1   42.0    13.46   12.38    2000   41.1       <.01   0.74

 

Decision Rules for Changing Instruction

Grade: K
Rating: Full bubble

Does your manual or published materials specify validated decision rules for when changes to instruction need to be made?

Yes

Specify the decision rules:

aimswebPlus applies a statistical procedure to the student’s progress monitoring scores in order to provide empirically based guidance about whether the student is likely to meet, fall short of, or exceed his/her goal. The calculation procedure (presented below) is fully described in the aimsweb Progress Monitoring Guide (Pearson, 2012). aimswebPlus users do not have to do any calculations; the online system performs them automatically. The decision rule is based on a 75% confidence interval for the student’s predicted score at the goal date. This confidence interval is student-specific and takes into account the number and variability of progress monitoring scores and the duration of monitoring. Starting at the sixth week of monitoring (when there are at least four monitoring scores), the aimswebPlus report following each progress monitoring administration includes one of the following statements:

A. “The student is projected to not reach the goal.” This statement appears if the confidence interval is completely below the goal score.

B. “The student is projected to exceed the goal.” This statement appears if the confidence interval is completely above the goal score.

C. “The student is projected to be near the goal. The projected score at the goal date is between X and Y” (where X and Y are the bottom and top of the confidence interval). This statement appears if the confidence interval includes the goal score.

If Statement A appears, the user has a sound basis for deciding that the current intervention is not sufficient and a change to instruction should be made. If Statement B appears, there is an empirical basis for deciding that the goal is not sufficiently challenging and should be increased. If Statement C appears, the student’s progress is not clearly different from the aimline, so there is not a compelling reason to change the intervention or the goal; however, the presentation of the confidence-interval range enables the user to see whether the goal is near the upper limit or lower limit of the range, which would signal that the student’s progress is trending below or above the goal.

A 75% confidence interval was chosen for this application because it balances the costs of the two types of decision errors. Incorrectly deciding that the goal will not be reached (when in truth it will be reached) has a moderate cost: an intervention that is working will be replaced by a different intervention. Incorrectly deciding that the goal may be reached (when in truth it will not be reached) also has a moderate cost: an ineffective intervention will be continued rather than being replaced. Because both kinds of decision errors have costs, it is appropriate to use a modest confidence level.

Calculation of the 75% confidence interval for the score at the goal date:

1. Calculate the trend line. This is the ordinary least-squares regression line through the student’s monitoring scores.

2. Calculate the projected score at the goal date. This is the value of the trend line at the goal date.

3. Calculate the standard error of estimate (SEE) of the projected score at the goal date, using the following formula:

$$\mathrm{SEE}_{\text{predicted score}} = \sqrt{\frac{\sum_{i=1}^{k}\left(y_i - y'_i\right)^2}{k-2}} \times \sqrt{1 + \frac{1}{k} + \frac{\left(GW - \bar{w}\right)^2}{\sum_{i=1}^{k}\left(w_i - \bar{w}\right)^2}}$$

where $k$ = the number of completed monitoring administrations, $w_i$ = the week number of the $i$-th completed administration, $\bar{w} = \frac{1}{k}\sum_{i=1}^{k} w_i$ = the mean week number, $GW$ = the week number of the goal date, $y_i$ = the monitoring score at week $w_i$, and $y'_i$ = the predicted monitoring score at that week (from the student’s trend line). The means and sums are calculated across all completed monitoring administrations up to that date. Add and subtract 1.25 times the SEE to the projected score, and round to the nearest whole numbers.
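A compact sketch of the whole procedure follows, assuming weekly score data. The 1.25 multiplier, the SEE formula, and the three guidance statements come from the text above; the function and variable names are illustrative, and this is not Pearson’s code.

```python
# Sketch of the aimswebPlus-style progress decision rule (illustrative only).
# Implements: OLS trend line, projected score at the goal date, the SEE
# formula above, and the 75% CI classification into statements A/B/C.
import numpy as np

def progress_guidance(weeks, scores, goal_week, goal_score):
    w = np.asarray(weeks, dtype=float)
    y = np.asarray(scores, dtype=float)
    k = len(y)
    if k < 4:
        return "Too few data points (at least four monitoring scores needed)."

    # 1. Ordinary least-squares trend line through the monitoring scores.
    slope, intercept = np.polyfit(w, y, 1)
    y_hat = intercept + slope * w

    # 2. Projected score at the goal date.
    projected = intercept + slope * goal_week

    # 3. Standard error of estimate of the projected score (formula above).
    resid_var = np.sum((y - y_hat) ** 2) / (k - 2)
    w_bar = w.mean()
    see = np.sqrt(resid_var) * np.sqrt(
        1 + 1 / k + (goal_week - w_bar) ** 2 / np.sum((w - w_bar) ** 2)
    )

    # 75% confidence interval: projected score +/- 1.25 * SEE, rounded.
    lo, hi = round(projected - 1.25 * see), round(projected + 1.25 * see)

    if hi < goal_score:
        return f"A: projected to not reach the goal (interval {lo} to {hi})."
    if lo > goal_score:
        return f"B: projected to exceed the goal (interval {lo} to {hi})."
    return f"C: projected to be near the goal, between {lo} and {hi}."

# Example: six weekly scores, goal of 70 at week 30 (made-up numbers).
print(progress_guidance(range(1, 7), [20, 22, 21, 25, 26, 28], 30, 70))
# -> C: projected to be near the goal, between 58 and 75.
```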

What is the evidentiary basis for these decision rules?

The decision rules are statistically rather than empirically based. The guidance statements that result from applying the 75% confidence interval to the projected score are correct probabilistic statements under two assumptions. (1) The student’s progress can be described by a linear trend line. If the pattern of the student’s monitoring scores is obviously curvilinear, then the projected score based on a linear trend will likely be misleading; we provide training in the aimsweb Progress Monitoring Guide about the need for users to take non-linearity into account when interpreting progress monitoring data. (2) The student will continue to progress at the same rate as he or she has been progressing to that point. This is an unavoidable assumption for any decision system based on extrapolating from past growth.

Even though the rules are not derived from data, it is useful to observe how they work in a sample of real data. For this purpose, we selected random samples of students in the aimsweb 2010–2011 database who were progress-monitored on either Reading Curriculum-Based Measurement (R-CBM) or Math Computation (M-COMP). All students selected scored below the 25th percentile in the fall screening administration of R-CBM or M-COMP. The R-CBM sample consisted of 1,000 students (200 at each of Grades 2 through 6) who had at least 30 monitoring scores, and the M-COMP sample included 500 students (100 per grade in Grades 2 through 6) with a minimum of 28 monitoring scores. This analysis was only a rough approximation, because we did not know each student’s actual goal or whether the intervention or goal was changed during the year.

To perform the analyses, we first set an estimated goal for each student by using the ROI at the 85th percentile of aimsweb national ROI norms to project their score at their 30th monitoring administration. Next, we defined “meeting the goal” as having a mean score on the last three administrations (e.g., the 28th through 30th administrations of R-CBM) that was at or above the goal score. At each monitoring administration for each student, we computed the projected score at the goal date and the 75% confidence interval for that score, and recorded which of the three decision statements was generated (projected not to meet goal, projected to exceed goal, or on-track/no-change).
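A rough sketch of that evaluation loop, reusing the progress_guidance function from the earlier sketch (the ROI_85 value, the data layout, and the student dictionary are placeholders, not the published norms or actual study data):

```python
# Sketch of the retrospective accuracy check described above (illustrative only).
# Assumes `students` maps an ID to an ordered list of >= 30 monitoring scores;
# ROI_85 stands in for the 85th-percentile national ROI (points per week).
ROI_85 = 1.5  # placeholder value, not the published norm

def evaluate(students: dict[str, list[float]]) -> None:
    for sid, scores in students.items():
        goal_week = 30
        goal = scores[0] + ROI_85 * goal_week   # estimated goal at the 30th administration
        met = sum(scores[-3:]) / 3 >= goal      # "meeting the goal": mean of last three
        for k in range(4, goal_week):           # guidance after each administration
            msg = progress_guidance(range(1, k + 1), scores[:k], goal_week, goal)
            print(sid, k, met, msg[:1])         # compare guidance (A/B/C) with the outcome
```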

In this analysis, accuracy of guidance to change (that is, accuracy of projections that the student will not reach the goal or will exceed the goal) reached a high level (80%) by about the 13th to 15th monitoring administration, on average. The percentage of students receiving guidance to not change (i.e., their trendline was not far from the aimline) would naturally tend to decrease over administrations as the size of the confidence interval decreased. At the same time, however, there was a tendency for the trendline to become closer to the aimline over time as it became more accurately estimated, and this worked to increase the percentage of students receiving the “no change” guidance.

Decision Rules for Increasing Goals

Grade: K
Rating: Full bubble

Does your manual or published materials specify validated decision rules for when goals should be increased?

Yes

Specify the decision rules:

The same statistical approach described under Decision Rules for Changing Instruction (GOM 9 above) applies to the decisions about increasing a goal. aimswebPlus provides the following guidance for deciding whether to increase a performance goal:

 If the student is projected to exceed the goal and there are at least 12 weeks remaining in the schedule, consider raising the goal.

What is the evidentiary basis for these decision rules? 

See GOM 9 evidentiary basis information above.

Improved Student Achievement

Grade: K
Rating: dash

Improved Teacher Planning

Grade: K
Rating: dash