
The Production of Information

Evaluation

The Office of Planning, Budget, and Evaluation made awards in both 1980 and 1985 primarily through contracts rather than grants. However, in 1980, the award mechanisms were more diverse. In recent years, OPBE has made long-term awards for policy analyses. These were not, however, comparable to awards to laboratories and centers: the OPBE awards are used for studies specified by the department, provide data collection and analytic support with a fast turnaround, and are therefore more comparable to contracts.

Summary

The cumulative result of the shifts in awards is that the majority of the department's information producers are institutions or contractors. That is, since NCES and the Office of Planning, Budget, and Evaluation never made many grants to individual researchers, NIE was the primary source of such support. This source, in essence, dried up during the period of our review.

In terms of the implications for educational information, contracts typically involve greater specification of the questions to be investigated and of the study design. Also, the products of contracts are typically reviewed by the funding agency before release, whereas the products of grants are typically reviewed after release. While contracts may be most applicable when there is a specific request for information (for example, a congressionally mandated study) or when continuity in data gathering is necessary (for example, in a statistical series), their use as the predominant vehicle for funding research is likely to constrain inquiry.

Chapter 3

The Quality of Information

If information is to inform debates, guide actions, or assess changes, it has to be high in quality. We reviewed evidence regarding four dimensions of quality (relevance, timeliness, technical adequacy, and impact) for the National Assessment of Educational Progress (NAEP), the Common Core of Data (CCD) for elementary and secondary education, and the Fast Response Survey System (FRSS). We assessed changes in quality and factors associated with them in each program. In this chapter, we present our case studies on these three programs and then describe practices associated with each dimension of quality.

In general, NAEP ranked high on all four quality dimensions, but it has suffered some decline in relevance and timeliness in adapting to fiscal constraints. CCD was not rated high on any of the four indicators. Data were not comparable across states, chiefly because they were reported at different levels of aggregation or were based on different definitions and procedures. Further, we could find little evidence on the use of CCD in policy decisions. Problems with CCD have long been recognized, but few have been solved. FRSS was rated moderate to high on quality, especially given the low budget associated with each survey. It was responsive to the information needs of the requester and minimized time delays by releasing findings early. It appeared to be technically adequate, but the reporting of procedures could be improved.

The case studies reveal several practices associated with high ratings on the quality dimensions. Relevance was increased through the addition of data elements, the tailoring of data collection to the information request, and flexible dissemination. Timeliness was improved by early release of data and diverse formats for dissemination. Technical adequacy was improved through appropriate quality-control procedures and the use of research to assess the credibility of the data.

The National Assessment of Educational Progress

Purpose and Background

NAEP is a congressionally mandated survey of the knowledge, skill, understanding, and attitudes of young Americans. Although the survey was not mandated until 1978 (20 U.S.C. 1221e), the department began funding NAEP data collection in 1968. Since then, more than 1 million 9-, 13-, and 17-year-olds and adults 26 to 35 years old have been assessed. Assessments have been conducted in 10 major school-related areas, but each content area has been assessed at staggered and varying intervals. Because of its sampling format, NAEP is flexible with regard to topic coverage and the target population that is surveyed. On several occasions, small-scale assessments have been added to the NAEP sampling frame and data collection procedures (for instance, the young-adult literacy assessment funded by NIE). NAEP's topic coverage and schedule since 1969 are in table III.2. (The funding history is in table III.1.)

The purposes of NAEP have changed over time. NAEP was originally conceived of as a means of obtaining a national accounting of educational progress. Because fears were expressed that NAEP could be used to devise a national curriculum and thereby encroach on the states' authority, the founders of NAEP deliberately devised the assessment so that it could not be used to derive state-to-state comparisons. Also, the original assessment format could not provide an overall score for an individual student. (Matrix sampling of items was used in constructing the test, so no student was tested on all the items.)
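
To make concrete why a matrix-sampled assessment supports group estimates but no individual overall score, the following minimal sketch may help; the item pool size, booklet size, and random assignment rule are purely illustrative assumptions, not the actual NAEP design.

```python
from collections import Counter
import random

# Illustrative sketch of matrix sampling of items. The pool size, booklet size, and
# assignment rule are assumptions for demonstration, not the actual NAEP design.
ITEM_POOL = [f"item_{i:03d}" for i in range(200)]   # full pool of assessment exercises
BOOKLET_SIZE = 25                                    # each student answers only a small subset

def assign_booklet(student_id: int, seed: int = 0) -> list:
    """Randomly assign a short booklet of items to one student."""
    rng = random.Random(seed * 1_000_003 + student_id)
    return rng.sample(ITEM_POOL, BOOKLET_SIZE)

booklets = [assign_booklet(sid) for sid in range(2_000)]

# Every item is answered by many students, so estimates for groups (a nation, a region,
# an age cohort) can be built up across booklets...
item_coverage = Counter(item for booklet in booklets for item in booklet)
print("responses per item:", min(item_coverage.values()), "to", max(item_coverage.values()))

# ...but no student sees more than a fraction of the pool, so the design yields no
# overall score for any individual student.
print(f"each student answers {BOOKLET_SIZE / len(ITEM_POOL):.0%} of the item pool")
```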

To minimize federal intervention, NAEP was originally conducted by a state-based consortium-the Education Commission of the States. Before 1979, federal funding was portioned out by cooperative agreements between NCES and the commission. In response to the 1978 congressional mandate, NIE assumed responsibility for NAEP and initiated a competitive grant framework. The only bidder was awarded a 3-year grant. After a two-stage competition in 1983, the Educational Testing Service (ETS) won a 5-year grant.

NAEP was awarded about $6 million in 1985, roughly the same as its allocation in 1972. In constant 1972 dollars, however, the 1985 award represents only about $2.4 million in purchasing power, a 59-percent decline.
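
The decline cited above follows from deflating the 1985 award into 1972 dollars. A minimal sketch of that arithmetic is shown below; the price-index values are approximate figures assumed for illustration, since the report does not state which deflator produced its 59-percent estimate.

```python
# Illustrative only: the index values below are assumed CPI-style figures chosen for
# demonstration; the report's own 59-percent figure reflects whatever deflator it used.
nominal_award_1985 = 6.0e6        # approximate 1985 NAEP award, nominal dollars
price_index_1972 = 41.8           # assumed price level, 1972
price_index_1985 = 107.6          # assumed price level, 1985

# Express the 1985 award in constant 1972 dollars.
real_award_1972_dollars = nominal_award_1985 * price_index_1972 / price_index_1985
decline = 1 - real_award_1972_dollars / nominal_award_1985

print(f"${real_award_1972_dollars / 1e6:.1f} million in 1972 dollars")  # about $2.3 million
print(f"{decline:.0%} loss of purchasing power")                        # roughly 60 percent
```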

Relevance

Over the past decade, NAEP's relevance to federal, state, and local stakeholders has been a chief focus of criticism. Over the past several years, NAEP has tried to address this concern by collecting extensive data on students' backgrounds, attitude variables, and educational conditions; expanding its policy committee's role in the review and development of background and attitude questions; and increasing the dissemination of and technical assistance for NAEP-generated material to states and local school districts.

However, other changes in NAEP's design have made it less relevant for answering certain types of questions. In particular, in 1969-73, five target populations (9-, 13-, and 17-year-olds in and out of school and adults) were assessed annually (see table III.3). In later years, from 1977 on, the number of target populations was reduced from five principal groups to three (9-, 13-, and 17-year-olds who remained in school). Assessments for specialized groups (for example, dropouts) were conducted on only two occasions in the past decade. In our 1976 assessment of NAEP,1 we attributed the decision to suspend data collection for young adults to budgetary restrictions. At the time of that review, this action was characterized as temporary. The pattern of assessment since 1976 suggests that budgetary restrictions have had a lasting effect.

The relevance of NAEP for assessing change is inherently limited by the frequency of the data collection. Because the time intervals between assessments have been lengthened, NAEP's ability to examine specific types of questions has diminished. For example, in a recent report on educational achievement, the Congressional Budget Office asserted that although NAEP has been able to document long-term trends in achievement, the intervals between assessments are too wide to ascertain precisely when declines or increases occurred.2 The frequency with which an area can be assessed is also limited by the nature of the assessment process; if the interval is too brief, there may not be enough time to analyze and interpret the data. Further, capitalizing on lessons learned from each assessment to improve subsequent assessments might also be hindered with shorter testing intervals.

Timeliness

Timeliness can be thought of in two ways: the timeliness of the assessment itself and the timeliness of reporting and disseminating other information products, such as technical reports, bulletins, and public-use data tapes.

Timeliness of Assessment

As we already noted, NAEP's skill areas have been assessed in rotation. This means that the most recent data available for reading may be more than 2 years old and, for other areas such as career and occupational development, up to 12 years old. Furthermore, for areas that have been reassessed, the intervals have been variable. (The pattern of testing over the history of NAEP is given in table III.2.)

1U.S. General Accounting Office, The National Assessment of Educational Progress: Its Results Need to Be Made More Useful, GAO/HRD-76-113 (Washington, D.C.: July 1976).

2Congressional Budget Office, Trends in Educational Achievement (Washington, D.C.: April 1986).

Several features of the assessment schedule are worth noting. The reduction of the 10 "content domains" to five "core" areas (reading, science, mathematics, social studies and citizenship, and humanities) was the result of budgetary constraints. Further, whereas NAEP assessments were conducted annually prior to 1980, the interval between assessments increased to 2 years in that year. According to the current grantee, budgetary restrictions were also a factor in this decision.

Recent changes in policy, however, have improved the timeliness of NAEP by making the assessment intervals more regular. Reading is scheduled for assessment every 2 years, and other content areas have been put on a 4-year or 6-year cycle. There are several technical advantages to this change; for example, students at different grade levels can be contrasted.

Timeliness of Reporting

There have been recent attempts to report NAEP results in a more timely fashion. Further, efforts to disseminate results were recently enhanced through the development of additional nontechnical products. A particular example is the "NAEPgram" recently developed by ETS, the grantee, as a means of informing the educational community of assessment results. ETS reported mailing 100,000 copies of the first "NAEPgram" to all elementary and secondary school principals and other professionals.

In an additional attempt to facilitate the dissemination of findings and improve public access to NAEP information, NIE developed the National Assessment of Educational Progress Information Retrieval System (NAEPIRS). This computerized data base contains findings and descriptions of assessments of 9-, 13-, and 17-year-old students, allowing users to tailor their analyses (for example, to examine specific subgroups or unreported NAEP data). The department reported that 4,000 copies of the data base had been put into circulation by May 1986.

Technical Adequacy

The technical adequacy of NAEP has been highly regarded in the education community and it has improved over time. In several instances, technical advances have resulted in increased relevance at state and local levels. Standardized age definitions coupled with alterations in the assessment cycle now make it possible to examine differences between groups in a given subject area. Sampling and reporting by grade level (in
