As educators face increasing pressure from federal, state, and local accountability policies to improve student achievement, the use of data has become more central to how many educators evaluate their practices and monitor students' academic progress.1 Despite this trend, questions about how educators should use data to make instructional decisions remain mostly unanswered. In response, this guide provides a framework for using student achievement data to support instructional decision making. These decisions include, but are not limited to, how to adapt lessons or assignments in response to students' needs, alter classroom goals or objectives, or modify student-grouping arrangements. The guide also provides recommendations for creating the organizational and technological conditions that foster effective data use. Each recommendation describes action steps for implementation, as well as suggestions for addressing obstacles that may impede progress. In adopting this framework, educators will be best served by implementing the recommendations in this guide together rather than individually.
The recommendations reflect both the expertise of the panelists and the findings from several types of studies, including studies that use causal designs to examine the effectiveness of data-use interventions, case studies of schools and districts that have made data use a priority, and observations from other experts in the field. The research base for this guide was identified through a comprehensive search for studies evaluating academically oriented data-based decision-making interventions and practices. An initial search for literature related to data use to support instructional decision making in the past 20 years yielded more than 490 citations. Of these, 64 used experimental, quasi-experimental, and single-subject designs to examine whether data use leads to increases in student achievement. Of the studies relevant to the panel's recommendations, only six met the causal validity standards of the What Works Clearinghouse (WWC).2

1. Knapp et al. (2006).
To indicate the strength of evidence supporting each recommendation, the panel relied on the WWC standards for determining levels of evidence, described below and in Table 1. It is important for the reader to remember that the level-of-evidence rating is not a judgment by the panel on how effective each of these recommended practices will be when implemented, nor is it a judgment of what prior research has to say about the effectiveness of these practices. The level-of-evidence ratings reflect the panel's judgment of the validity of the existing literature to support a causal claim that when these practices have been implemented in the past, positive effects on student academic outcomes were observed. They do not reflect judgments of the relative strength of these positive effects or the relative importance of the individual recommendations.
A strong rating refers to consistent and generalizable evidence that an intervention strategy or program improves outcomes.3
A moderate rating refers either to evidence from studies that allow strong causal conclusions but cannot be generalized with assurance to the population on which a recommendation is focused (perhaps because the findings have not been widely
2. Reviews of studies for this practice guide applied WWC Version 1.0 standards. See Version 1.0 standards at http://ies.ed.gov/ncee/wwc/pdf/wwc_version1_standards.pdf.

3. Following WWC guidelines, improved outcomes are indicated by either a positive, statistically significant effect or a positive, substantively important effect size (i.e., greater than 0.25).