Burst:Reading

Study: Dubal, Harnly, Pavlov, Richards, Yambo, et al. (2012)

Dubal, M., Harnly, A., Pavlov, M., Richards, K., Yambo, D., & Gushta, M. (2012). Effects of Burst®:Reading Early Literacy Intervention on Student Performance: 2012 Report. Retrieved from www.amplify.com/redirect/pdf/general/BurstEfficacyStudy.pdf

Burst:Reading delivers highly differentiated reading instruction based on formative assessment data. Using mobile technology for assessment administration, sophisticated data-analysis algorithms to generate lesson plans for small groups, and engaging instruction, Burst:Reading puts every student on his or her most efficient path to reading. Teachers, coaches, specialists, and qualified volunteers deliver 10-day “Bursts” of instruction to small groups of students. These Bursts are generated by the system based on each student’s formative assessment results.

Burst:Reading is intended for use in grades K-6. The program is intended for use with any student at risk of academic failure. The academic area of focus is reading (including phonological awareness, phonics/word study, comprehension, fluency, and vocabulary).

Where to obtain: 
Amplify Education, Inc.
55 Washington St., Suite 900 Brooklyn, NY 11201
Phone: 1-800-823-1969, Option 1
Website: www.amplify.com

Cost: $60.00 per student annual license fee.

The annual student license fee provides access to the digital intervention program, including customized curriculum modules and reporting. Teachers are able to use 10-day lesson sequences that are customized for their small groups of intervention students based on the formative assessment results of each student. Additional costs associated with the program include per student licenses to formative assessment (generally $14.90 per student), teacher kits ($215 for K-3; $195 for 4-6) and professional development and implementation support (varies based on nature of the implementation).

It is recommended that Burst:Reading be used in small groups of three to five students.

Burst:Reading takes 30 minutes per session with a recommended 5 sessions per week.

The program includes a highly specified teacher’s manual.

The program requires a computer with internet access in order to generate and view the 10-day sequences of instruction as well as reporting. Educators also need a mobile device for administration of the formative assessment measures. 

Training is required for the instructor. Training beyond 8 hours is required if the user is not familiar with the formative assessment measures, which include DIBELS Next. The length of this training varies based on the implementation. During this training, educators learn to implement Burst:Reading with fidelity, including administering formative assessments, accessing sequences of lessons through the web-based interface, administering instruction, and monitoring success based on results reporting.

At minimum, instructors must be paraprofessionals. The program does not assume that the instructor has expertise in a given area.

Training manuals and materials are available and have been designed to ensure that educators are prepared to implement the program faithfully. Feedback on training sessions is regularly solicited during field use, and adjustments to sessions and materials are made to continuously improve outcomes.

Additional follow-up sessions and coaching are available. Online technical support and toll-free phone technical support are available at no additional cost.

 

Participants: Unconvincing Evidence

Sample size: 9,220 (4,610 program, 4,610 control)

Risk Status: Within Burst:Reading, students are identified as being at risk of academic failure according to their performance on DIBELS at the beginning and middle of the school year (i.e., at the beginning of each semester). 

The Burst:ELI algorithm determines individual students’ intervention priority and creates intervention groups in the following manner:

  1. Student assessment results are processed, yielding:
     a. A gross skill rating based on DIBELS Benchmark Status or risk category (i.e., Red, Yellow, and Green). The Burst Reading Assessment supplemental measures have comparable performance levels.
     b. A fine-grained skill rating that differentiates intervention priority within each risk category (e.g., students who are Red on DORF are further differentiated and prioritized for intervention based on DORF subscores).
     c. Up to two Zone of Proximal Development (ZPD) skills. A student’s highest-priority ZPD skill is the earliest skill in the set instructional sequence on which the student tests poorly, evidencing a need for intervention instruction.
  2. Using the results from step 1, students are prioritized for intervention based on their performance relative to the rest of the students in their class or grade.
  3. The teacher or other educator then selects the number of intervention groups that the Burst algorithm should create. The algorithm generates a number of possible groupings and comes to a final decision using a social welfare function, which selects groups of students for whom the utility of the instruction to be delivered is most similar among all students in the group, yielding homogeneous groups. A minimal sketch of this grouping step follows the list.
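
The white paper does not publish the grouping algorithm itself. As a hedged illustration of step 3, the sketch below assumes each student carries a single "utility" score summarizing how well the proposed instruction fits that student, searches candidate groupings exhaustively, and scores each grouping with a social welfare function that rewards within-group similarity. All names (Student, welfare, make_groups) and the cut-point search are hypothetical, not Amplify's implementation.

```python
# Hypothetical sketch of homogeneous intervention grouping; not Amplify's code.
from dataclasses import dataclass
from itertools import combinations
from statistics import pvariance
from typing import List

@dataclass
class Student:
    name: str
    utility: float  # assumed: fit of the proposed instruction for this student

def welfare(groups: List[List[Student]]) -> float:
    """Social welfare function: prefer groupings whose members have the
    most similar utility, i.e., minimal within-group variance."""
    return -sum(pvariance([s.utility for s in g]) for g in groups if len(g) > 1)

def make_groups(students: List[Student], n_groups: int) -> List[List[Student]]:
    """Generate candidate groupings by splitting the utility-sorted roster
    at every possible set of cut points; keep the welfare-maximizing one."""
    ranked = sorted(students, key=lambda s: s.utility)
    best, best_score = None, float("-inf")
    for cuts in combinations(range(1, len(ranked)), n_groups - 1):
        bounds = (0, *cuts, len(ranked))
        groups = [ranked[a:b] for a, b in zip(bounds, bounds[1:])]
        score = welfare(groups)
        if score > best_score:
            best, best_score = groups, score
    return best
```

For a roster of, say, twelve prioritized students, make_groups(roster, 3) returns the three contiguous utility-ranked groups with the smallest within-group spread, mirroring the homogeneous small groups the algorithm is described as producing. The exhaustive cut-point search is purely illustrative.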

The table below presents the pre-test means for the treatment and control groups on each measure, by semester. The score ranges associated with the At Risk and Some Risk performance levels for each DIBELS measure are also provided, along with the score at the 25th percentile for each measure, based on a national norming study conducted by Cummings et al. (2011). While a number of the pre-test means fall above the At Risk range, all of the means are clearly below the 25th percentile associated with national norms.

 

 

TOY        Measure   Screening Result Mean       Score Range               Score at
                     Treatment      Control      At Risk     Some Risk     25th Percentile
K spring   PSF       8.33           8.38         0-6         7-17          12
1 fall     NWF       9.94           10.00        0-12        13-23         20
1 spring   NWF       31.57          32.30        0-29        30-49         41
2 fall     ORF       19.33          19.34        0-25        26-43         31
2 spring   ORF       33.95          34.57        0-51        52-67         60
3 fall     ORF       39.38          41.07        0-52        53-76         59
3 spring   ORF       52.40          53.85        0-66        67-91         73

Note: TOY = time of year.
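
For illustration only, the At Risk and Some Risk cut points in the table map onto the Red/Yellow/Green risk categories referenced in the grouping algorithm above. The sketch below hard-codes the table's cut points; CUTS and risk_category are hypothetical names, and actual DIBELS benchmark logic is more involved than this.

```python
# Illustrative mapping from a DIBELS score to a risk category, using only the
# cut points in the table above. Hypothetical helper, not DIBELS code.
CUTS = {
    # (TOY, measure): (top of At Risk range, top of Some Risk range)
    ("K spring", "PSF"): (6, 17),
    ("1 fall", "NWF"): (12, 23),
    ("1 spring", "NWF"): (29, 49),
    ("2 fall", "ORF"): (25, 43),
    ("2 spring", "ORF"): (51, 67),
    ("3 fall", "ORF"): (52, 76),
    ("3 spring", "ORF"): (66, 91),
}

def risk_category(toy: str, measure: str, score: int) -> str:
    """Return Red (At Risk), Yellow (Some Risk), or Green (above Some Risk)."""
    at_risk_max, some_risk_max = CUTS[(toy, measure)]
    if score <= at_risk_max:
        return "Red"      # At Risk
    if score <= some_risk_max:
        return "Yellow"   # Some Risk
    return "Green"

print(risk_category("K spring", "PSF", 8))  # -> "Yellow" (8 falls in the 7-17 range)
```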

Demographics:

 

                                 Program               Control
                                 Number    Percent     Number    Percent
Grade level
  Kindergarten                   481       5%          481       5%
  Grade 1                        1,533     17%         1,533     17%
  Grade 2                        1,855     20%         1,855     20%
  Grade 3                        741       8%          741       8%
  Grades 4-12                    none (the K-3 rows account for all 4,610 students in each group)
Race-ethnicity
  African-American               2,782     30%         2,454     27%
  American Indian                8         0.1%        77        1%
  Asian/Pacific Islander         48        0.5%        43        0.5%
  Hispanic                       537       6%          672       7%
  White                          1,025     11%         1,215     13%
  Other                          210       2%          149       2%
Socioeconomic status
  Subsidized lunch               4,353     47%         4,207     46%
  No subsidized lunch            257       3%          403       4%
Disability status                not reported (speech-language impairments, learning disabilities, behavior disorders, intellectual disabilities, other, not identified with a disability)
ELL status
  English language learner       646       7%          585       6%
  Not English language learner   3,964     43%         4,025     44%
Gender
  Female                         2,002     22%         1,904     21%
  Male                           2,608     28%         2,706     29%

Note: p values for the chi-square comparisons were not reported. Percentages are of the total sample (N = 9,220).

Training of Instructors: Prior to implementing Burst:ELI, school personnel participated in a standardized training series that included a one-day on-site session to prepare teachers or interventionists, a follow-up webinar for teachers or interventionists after 6–10 weeks, and a half-day on-site session to prepare instructional leaders. This training followed a common “see one, do one” model in the class with students, so teachers could quickly learn, through context, how the Burst:ELI instruction should be delivered. Ongoing technical training was also provided to school and district staff to help them install, manage, and troubleshoot the software.

Design: Unconvincing Evidence

Did the study use random assignment?: No

If not, was it a tenable quasi-experiment?: Yes

If the study used random assignment: at pretreatment, were the program and control groups not statistically significantly different, and did the mean standardized difference fall within 0.25 SD, on measures used as covariates or on pretest measures also used as outcomes?: N/A

If not: at pretreatment, were the program and control groups not statistically significantly different, and did the mean standardized difference fall within 0.25 SD, on measures central to the study (i.e., pretest measures also used as outcomes), and were outcomes analyzed to adjust for pretreatment differences?: Yes

Were the program and control groups demographically comparable at pretreatment?: Yes

Was there attrition bias1?: No

Did the unit of analysis match the unit for random assignment (for randomized studies) or the assignment strategy (for quasi-experiments)?: No

1 NCII follows guidance from the What Works Clearinghouse (WWC) in determining attrition bias. The WWC model for determining bias based on a combination of differential and overall attrition rates can be found on pages 13-14 of this document: http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_procedures_v2_1_standards_handbook.pdf

 

Fidelity of Implementation: Unconvincing Evidence

Describe when and how fidelity of treatment information was obtained: Burst:Reading is a software-based intervention requiring that educators access the product website to generate student groupings, review student performance reports, and download instructional materials. Fidelity of treatment information was therefore inferred by reviewing teachers' access to this information on the website. Additionally, the timing of assessment data collection during intervention was compared against the expected two-week assessment intervals.

Provide documentation (i.e., in terms of numbers) of fidelity of treatment implementation: A fidelity of implementation analysis was not conducted as part of the current study. Fidelity of implementation results were mentioned briefly in the white paper but were not included there; the results of the analysis, from an unpublished paper, are provided below.

Due to the post-hoc nature of this study, only one component of Burst:ELI intervention fidelity could be partially examined: exposure. Exposure was operationalized according to two types of implementation data that were automatically tracked by the Burst:ELI system:

- the number of instructions a Burst:ELI student received in a semester or a year; and
- the timeliness with which progress monitoring assessments were delivered.

The Burst:ELI system uses these data to remind teachers via email to assess students or to begin instruction when they begin to fall behind schedule.

Only Burst:ELI students were included in fidelity of implementation analyses. Those students missing demographic data were included in this analysis, as student characteristics were not used. There were 6,584 kindergarten students, 6,369 first grade students, 4,996 second grade students, and 3,045 third grade students included in the fidelity analysis.

Procedures and metrics

The table below describes our fidelity metrics.

Fidelity metric: Number of Instructions
  Details: The number of 2-week-long Burst:ELI instructional sequences (Bursts) the student received in the semester.
  Values: 0 to 11 for each semester; 0 to 22 for each year.

Fidelity metric: Probe Rating
  Details: A measure of the timeliness of progress monitoring assessments. Ideally, students are progress monitored at the end of (or several days before the end of) a Burst:ELI sequence, prior to the beginning of the next instructional sequence, so that the Burst:ELI algorithm can adjust subsequent instructional recommendations according to evolving student needs.
  Values:
    2 = ideal (progress monitoring ≤ 4 days prior to/on the day of instruction generation), or the instruction was the first instruction of the semester.
    1 = not ideal (progress monitoring prior to the ideal assessment window).
    0 = not assessed, assessed after the ideal window, or assessed on the wrong measure.

* All metrics were calculated per student per semester (per-semester values were based on the medians across all instructional sequences that occurred within the relevant time period).
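
As a hedged sketch of the probe-rating rule (a hypothetical helper, not Burst:ELI source code), the function below scores a single instructional sequence per the 0/1/2 rule in the table, and the per-semester value is taken as the median across a student's sequences, per the footnote. All dates are invented for illustration.

```python
from datetime import date
from statistics import median

def probe_rating(probe_date, generation_date, first_of_semester=False):
    """Score one instructional sequence per the rule above:
    2 = ideal, 1 = assessed too early, 0 = missing/late/wrong measure."""
    if first_of_semester:
        return 2                  # first instruction of the semester counts as ideal
    if probe_date is None:
        return 0                  # not assessed (wrong-measure probes also score 0)
    days_before = (generation_date - probe_date).days
    if 0 <= days_before <= 4:
        return 2                  # within 4 days prior to / on day of generation
    return 1 if days_before > 4 else 0  # too early -> 1; after generation -> 0

# Per-semester value: median across all of a student's sequences that semester.
ratings = [
    probe_rating(None, date(2012, 9, 10), first_of_semester=True),
    probe_rating(date(2012, 9, 21), date(2012, 9, 24)),   # 3 days early: ideal
    probe_rating(date(2012, 9, 28), date(2012, 10, 15)),  # 17 days early: not ideal
]
print(median(ratings))  # -> 2
```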

 

Results

For each semester examined, we calculated descriptive statistics for the two fidelity-of-exposure metrics and compared results across all grades and semesters.

           Number of instructions               Probe rating
TOY        Range    Mean    Median    SD        Range    Mean    Median    SD
K spring   1-10     4.59    4         2.40      0-2      0.97    1         0.48
1 fall     1-9      4.22    4         1.86      0-2      0.87    1         0.57
1 spring   1-11     4.55    5         2.10      0-2      0.52    0         0.62
2 fall     1-9      4.26    4         1.85      0-2      0.70    1         0.62
2 spring   1-11     4.41    4         1.98      0-2      0.45    0         0.59
3 fall     1-9      3.64    3         1.77      0-2      0.95    1         0.57
3 spring   1-9      3.75    3         1.80      0-2      0.89    1         0.52

 

Correlational Analyses with Score Growth

For each semester and year examined, we further calculated correlations between the fidelity metrics and the associated student growth in DIBELS performance across the semester. Score growth was calculated as the difference between the pre- and post-test scores on the appropriate DIBELS measure for each semester. These correlations are given in the table below.

 

TOY        Measure   Correlation with             Correlation with
                     number of instructions*      probe rating*
K spring   PSF       0.15                         0.04
1 fall     NWF       0.19                         -0.11
1 spring   NWF       0.10                         0.02
2 fall     ORF       0.10                         -0.05
2 spring   ORF       0.15                         -0.06
3 fall     ORF       0.18                         -0.08
3 spring   ORF       0.08                         0.04

* All correlations significant at α = 0.05.
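
These are ordinary bivariate (Pearson) correlations between a per-student fidelity metric and the same student's semester score growth. The report does not state what software computed them; the sketch below, with fabricated numbers, shows the computation under that assumption.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient (a stand-in for whatever statistics
    package the study actually used)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Score growth per the report: post-test minus pre-test on the semester's
# DIBELS measure. All values below are fabricated for illustration only.
pre = [8, 12, 5, 20, 15]
post = [14, 18, 6, 33, 22]
growth = [b - a for a, b in zip(pre, post)]
n_instructions = [4, 6, 2, 8, 5]  # hypothetical per-student fidelity metric
print(round(pearson(n_instructions, growth), 2))
```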

Small but positive correlations were observed between the number of instructions delivered and score growth at all semesters, indicating that receiving more Burst:ELI instructional sequences is somewhat related to higher student performance. Very strong correlations were not expected between these variables. Students vary in the amount of intervention instruction they need to acquire a skill. Achieving progress monitoring goals for an instructed skill should lead to a modification of instruction to focus on a higher-level skill, although some instructors may choose to terminate Burst:ELI instruction for those students as a result. Students who remain in Burst:ELI for the entire semester, on the other hand, likely need additional time to acquire the target skills. Thus, variability in the number of instructions students receive may be more closely related to how quickly students are able to master a skill than to their performance on related progress monitoring assessments.

Smaller and mostly negative correlations were observed between probe rating and score growth, indicating that higher growth is associated with less frequent or delayed progress monitoring. Students who demonstrate higher growth may be progress monitored less often by teachers who recognize this growth and therefore see a decreased need for assessment. While this may not be in strict adherence to the principles of Burst:ELI, it does follow typical RtI practice. Within RtI models, it is typically recommended that students with the greatest need for intervention supports be monitored more often (e.g., once every two weeks) than students with less significant need (e.g., once a month).

 

 

Measures Targeted: Partially Convincing Evidence

Measures Broader: Data Unavailable

For each targeted measure below, we describe its reliability statistics, its relevance to the program's instructional content, and the exposure to related content among the control group.

DIBELS Phoneme Segmentation Fluency (PSF)

Reliability: 0.88 (alternate form; two weeks); 0.79 (alternate form; one month).

Relevance to program instructional content: Specific phonological awareness needs for students are determined using data. Lesson sequences matched to the data-identified needs of students, including phonological awareness skills of increasing difficulty, are developed. Students learn to identify rhymes and syllables, segment words by onset and rime, and blend and segment phonemes in spoken words.

PSF is a general indicator of overall phonological awareness skills that asks students to segment spoken words into their smallest units, phonemes.

Exposure to related content among the control group: The control condition was considered a “business-as-usual” condition, meaning that no other treatment was given to the control students by the researchers, but that interventions other than Burst:Reading could in theory have been delivered in these schools. At least 65% of districts that use the mCLASS:DIBELS assessment (and not Burst:Reading) have a Response-to-Intervention (RTI) system in place, which involves the implementation of targeted interventions for students struggling in reading based on screening data. We do not have information about the percentage of students in each school who participated in a different intervention, nor do we have specific information about what skills were the focus of such intervention programs. However, many early literacy intervention programs target phonological awareness, phonics/word study, and fluency skills; thus, it is likely that many students in the control group received instruction similar to that provided in Burst:Reading. Further, all control schools administered mCLASS:DIBELS, so they likely targeted similar skills for their students in need of intervention.

DIBELS Phoneme Segmentation Fluency (PSF) is an appropriate targeted measure of Burst:ELI outcomes in kindergarten, as it is directly related to the instructional content provided and measures a skill that is highly predictive of future reading success. Specifically, because Burst is an intervention program for at-risk readers, direct and explicit instruction on phonemic awareness is emphasized for kindergarten students. The importance of this skill has been well documented in the work of Torgesen, who recommends that phonemic awareness be explicitly taught and measured as a means of reducing reading difficulties and disabilities, especially for students who are at risk (Torgesen, Wagner, & Rashotte, 1994), as have the construct validity and predictive validity of the PSF measure (Good, Kaminski, Dewey, Powell-Smith, & Latimer, 2013).

DIBELS Nonsense Word Fluency (NWF)

Reliability: 0.83 (alternate form; one month).

Relevance to program instructional content: Specific phonics/word analysis needs for students are determined using data. Lesson sequences matched to the data-identified needs of students, including phonics/word analysis skills of increasing difficulty, are developed. Students learn letter-sound correspondence, sounding out and blending regular words with simple patterns such as VC and CVC, common letter-combination sounds, and advanced phonics skills.

Students practice these skills in isolation and in the context of short sentences that are 100% decodable.

Students are taught to read real words.

NWF is a general indicator of overall basic phonics skills that asks students to read VC and CVC pseudo-words as whole words or sound by sound. This requires students to rely on their decoding skills rather than their sight word knowledge to succeed.

 

Exposure to related content among the control group: the same business-as-usual conditions as described above for PSF.

DIBELS Nonsense Word Fluency (NWF) is an appropriate targeted measure of Burst:ELI outcomes in Grade 1, as it is directly related to the instructional content provided and measures a skill that is highly predictive of future reading success. Specifically, direct and explicit instruction on the alphabetic principle, or phonics, is emphasized for first-grade students. The importance of phonics skills has been well documented (e.g., National Reading Panel, 2000), as have the construct validity and predictive validity of NWF (Good et al., 2013).

DIBELS Oral Reading Fluency (ORF)

Reliability: 0.92-0.97 (test-retest); 0.89-0.94 (alternate form).

Relevance to program instructional content: Data are used to determine whether students have fluency needs that are not attributable to needs in phonological awareness or phonics/word analysis skills. Students engage in repeated readings of grade-level text, with fluency and expression emphasized, after hearing teacher models of fluent and expressive reading.

Burst: Reading also includes instruction focused on vocabulary and comprehension.

ORF is a general indicator of overall reading proficiency including comprehension that asks students to read short, grade-level passages aloud at an appropriate rate and with appropriate accuracy, and teachers can rate their expression with a qualitative scale.

Exposure to related content among the control group: the same business-as-usual conditions as described above for PSF.

DIBELS ORF is an appropriate targeted measure of Burst:ELI outcomes in Grades 2 and 3, given that the emphasis of instruction within Burst for students in those grades is on building fluency and comprehension. Further, fluency measures, including ORF, have been shown to be indicators of, and directly related to, students’ overall reading skills, including comprehension (Fuchs, Fuchs, Hosp, & Jenkins, 2001; Pinnell et al., 1995; Carver, 1990). Pikulski and Chard (2005) define fluency as follows:

Reading fluency refers to efficient, effective word-recognition skills that permit a reader to construct the meaning of text. Fluency is manifested in accurate, rapid, expressive oral reading and is applied during, and makes possible, silent reading comprehension (p. 510).

The importance of reading fluency for comprehension outcomes was highlighted by LaBerge and Samuels (1974) and Stanovich (1980). They explain that reading requires both accurate decoding and comprehension, or construction of meaning, and that readers cannot successfully focus attention on both processes at once. If readers must focus attention on decoding words, little attention is left for comprehension of the text. Readers must read fluently to free up attention to devote to reading with comprehension.

ORF is a general outcomes measurement or curriculum-based measurement (CBM) tool that has long been used to measure students’ overall reading skills. General outcomes measures are purposely designed to be brief and efficient indicators of general skills and are not intended to provide in-depth information about a specific skill area. There is extensive empirical evidence documenting the correlation between ORF and other comprehension outcome measures that are lengthier and have more face validity and predictive validity than ORF for overall reading outcomes (e.g., reading passages and responding to associated questions; see Fuchs, Fuchs, Hosp, & Jenkins, 2001; Goffreda, DiPerna, & Pedersen, 2009; Jenkins et al., 2003; Shinn et al., 1992).

Research has also documented the utility of ORF for instructional decision-making. When teachers collect ORF data, graph the results, and make decisions based on systematic decision rules, student outcomes are improved (see Stecker, Fuchs, and Fuchs, 2005 for a review of the research on CBM). 

 

Broader Measure Reliability Statistics, Relevance to Program Instructional Content, and Exposure to Related Content Among Control Group: NA

Number of Outcome Measures: 1 Prereading, 6 Reading

Mean ES - Targeted: 0.11*

Mean ES - Broader: Data Unavailable

Effect Size:

Targeted Measures

Construct     Measure            Effect Size
Prereading    PSF: K Spring      0.19**
Reading       NWF: 1st Fall      0.09
Reading       NWF: 1st Spring    0.06
Reading       ORF: 2nd Fall      0.03
Reading       ORF: 2nd Spring    0.17***
Reading       ORF: 3rd Fall      0.06
Reading       ORF: 3rd Spring    0.15*

Broader Measures

Construct     Measure            Effect Size
NA

 

Key
*       p ≤ 0.05
**     p ≤ 0.01
***   p ≤ 0.001
–      Developer was unable to provide necessary data for NCRTI to calculate effect sizes
u      Effect size is based on unadjusted means
†      Effect size based on unadjusted means not reported due to lack of pretest group equivalency, and effect size based on adjusted means is not available
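
The report does not spell out its effect size formula. A common choice for a study of this design is a standardized mean difference such as Hedges' g; the sketch below illustrates that computation under that assumption, with hypothetical summary statistics (the inputs are contrived so the result echoes the 0.11 mean targeted ES above; they are not taken from the study).

```python
from math import sqrt

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with small-sample correction (Hedges' g).
    Illustrative only; the report's exact formula is not stated."""
    pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd
    return d * (1 - 3 / (4 * (n_t + n_c) - 9))  # small-sample correction factor

# Hypothetical posttest summary statistics for a single measure:
print(round(hedges_g(35.2, 33.9, 12.0, 12.4, 4610, 4610), 2))  # -> 0.11
```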

 

Visual Analysis (Single Subject Design): N/A

Disaggregated Data for Demographic Subgroups: Yes

Disaggregated Targeted Measures: African American

Construct     Measure            Effect Size
Prereading    PSF: K Spring      0.19*
Reading       NWF: 1st Fall      0.06
Reading       NWF: 1st Spring    0.09
Reading       ORF: 2nd Fall      0.10
Reading       ORF: 2nd Spring    0.13*
Reading       ORF: 3rd Fall      0.12
Reading       ORF: 3rd Spring    0.20*

Disaggregated Targeted Measures: Hispanic

Construct     Measure            Effect Size
Prereading    PSF: K Spring      0.17
Reading       NWF: 1st Fall      0.33*
Reading       NWF: 1st Spring    -0.18
Reading       ORF: 2nd Fall      -0.01
Reading       ORF: 2nd Spring    0.12
Reading       ORF: 3rd Fall      -0.05
Reading       ORF: 3rd Spring    0.08

Disaggregated Targeted Measures: Caucasian

Construct     Measure            Effect Size
Prereading    PSF: K Spring      0.20
Reading       NWF: 1st Fall      0.06
Reading       NWF: 1st Spring    0.08
Reading       ORF: 2nd Fall      -0.07
Reading       ORF: 2nd Spring    0.25*
Reading       ORF: 3rd Fall      0.00
Reading       ORF: 3rd Spring    0.00

Disaggregated Targeted Measures: ELL

Construct     Measure            Effect Size
Prereading    PSF: K Spring      0.01
Reading       NWF: 1st Fall      0.20
Reading       NWF: 1st Spring    0.01
Reading       ORF: 2nd Fall      -0.01
Reading       ORF: 2nd Spring    0.12
Reading       ORF: 3rd Fall      -0.09
Reading       ORF: 3rd Spring    0.18

Disaggregated Targeted Measures: FRPL

Construct     Measure            Effect Size
Prereading    PSF: K Spring      0.18**
Reading       NWF: 1st Fall      0.08
Reading       NWF: 1st Spring    0.07
Reading       ORF: 2nd Fall      0.06
Reading       ORF: 2nd Spring    0.18*
Reading       ORF: 3rd Fall      0.04
Reading       ORF: 3rd Spring    0.19**

Disaggregated Broader Measures

Construct     Measure            Effect Size
NA

 

Key
*       p ≤ 0.05
**     p ≤ 0.01
***   p ≤ 0.001
–      Developer was unable to provide necessary data for NCRTI to calculate effect sizes
u      Effect size is based on unadjusted means
†      Effect size based on unadjusted means not reported due to lack of pretest group equivalency, and effect size based on adjusted means is not available

 

Disaggregated Data for <20th Percentile: No

Administration Group Size: Small Group (n = 3-5)

Duration of Intervention: 30 minutes per session, 5 sessions per week, across multiple weeks

Minimum Interventionist Requirements: Paraprofessional; training time varies and can exceed 8 hours

Reviewed by WWC or E-ESSA: No

What Works Clearinghouse Review

This program was not reviewed by What Works Clearinghouse.

 

Evidence for ESSA

This program was not reviewed by Evidence for ESSA.

 

Additional Research Studies

This program has no additional research studies at this time.

Other Research: Potentially Eligible for NCII Review: 0 studies