Technical Review Committees Process

The National Center on Intensive Intervention has established a standard process to evaluate the scientific rigor of tools and interventions that can be used as part of a data-based individualization program for educating students with disabilities who require intensive intervention due to persistent learning and behavior problems. The review process consists of five steps: (1) Identification of Tools and Interventions for Review; (2) First- and Second-Level Review; (3) Interim Communication with Vendors; (4) Third-Level Review; and (5) Finalization and Publication of Results. A detailed description of each step follows.

Step 1: Identification of Tools and Interventions for Review

The first step is the identification of tools and interventions for review. For progress monitoring tools and commercial (or “branded”) interventions, vendors are invited to respond to a call for submissions issued by the Center. This call for submissions is distributed widely through the Center’s website and various email newsletters, and is sent directly to all vendors who have contacted the Center to express an interest in submitting tools and interventions. All submitters are required to complete an evaluation form developed by the Center’s Technical Review Committees (TRCs). Each TRC has identified standards of technical adequacy that are critical to the Center’s definition of data-based individualization. The evaluation form, The Standard Protocol for Evaluating Intensive Interventions and Tools (the protocol), asks an extensive array of questions related to these technical adequacy standards and encourages vendors to submit any relevant accompanying evidence. Vendors are given six weeks to respond to the call for submissions.

NCII recognizes that some interventions are “non-branded,” meaning that they were not developed, and are not owned or sold for profit, by a commercial vendor or researcher. Nonetheless, many of these non-branded interventions are well-known and commonly used strategies for addressing intensive academic and/or behavioral needs. Therefore, each year the TRC identifies one or two of these non-branded interventions to review and include on the chart. One TRC member acts as the “vendor” and completes the evaluation protocol. The TRC member selects up to ten studies that are the most recent, the most rigorous in design, and the most closely aligned with the original purpose of the intervention. These selection criteria are the same criteria recommended to vendors who complete a protocol for their own interventions.

Once the submissions are received, Center staff checks each submission for completeness. The criteria for a complete submission vary by TRC and are explained in the protocol instructions. If a submission does not meet these requirements, the vendor is notified and given the opportunity to resubmit with additional information, so long as the submission period remains open.

Step 2: First- and Second-Level Review

The next step is the first-level review. Submissions that meet the basic requirements are randomly assigned to two TRC members who have no conflict of interest with the particular intervention or tool. To ensure the integrity and independence of the evaluation process and final recommendations, all TRC members are asked to disclose all contractual obligations and affiliations with educational testing and measurement firms and organizations, so that any actual or apparent conflict of interest can be avoided.

During the first level of review, TRC members review and rate each intervention or tool independently. Neither reviewer knows who the other assigned reviewer is or what that reviewer’s ratings are. Using an online review system, each TRC member enters his or her ratings and corresponding comments. When finished, reviewers lock in their ratings, signifying the completion of the first-level review.

Once both reviewers assigned to a review have locked in their first-level ratings, the second-level review begins. During the second-level review, the reviewers see each other’s ratings and are asked to come to a consensus rating for each standard and to provide corresponding questions or comments. These questions and comments are used to relay the preliminary rating information to the vendor (see Step 3).
 
Step 3: Interim Communication with Vendors
 
The third step involves communicating the results of the second-level review to the vendors. Center staff compiles the results of the second-level review and prepares a summary sheet for each vendor containing the ratings and a summary of the reviewers’ comments. Vendors are then invited to submit additional evidence if appropriate; in some cases, the reviewers may specifically ask to see more information. Vendors are given two weeks to provide this additional evidence.
 
Step 4: Third-Level Review
 
The next step is the third-level review. At this stage, any additional evidence provided by the vendors during Step 3 is distributed to the TRC reviewers, who may then adjust their ratings and comments accordingly. Co-reviewers work together to assign a final rating based on this evidence.
 
Step 5: Finalization and Publication of Results
 
The last step in the process is to finalize and publish the results. Once the ratings are finalized, the Center conducts a debriefing session for the entire TRC. During the debriefing, all TRC members learn the overall results and see what the tools chart will look like. This is an opportunity for TRC members to raise any concerns they may have about the results. Finally, the results are published in a consumer-friendly tools chart posted on the Center’s website.

Interventions and tools that are published on the chart remain there for the duration of the Center’s funding period. Vendors may resubmit with new evidence in subsequent years if they would like to improve their ratings.