
The Quality of Information

The center collects information on what each state pays its teachers, a critical resource expenditure.

Timeliness

Several recent reviews have concluded that the center's activities have been inadequate with regard to timeliness, a problem that dates back 30 years. For example, a paper commissioned by NCES noted that by July 1985, data on enrollment by grade were no more recent than fall 1983.8

To provide a better understanding of such deficiencies, we examined the age of data on elementary and secondary education reported in the 1980 and 1983-84 editions of the Digest of Education Statistics (as of September 1986, the latter was the most recent publication). Since the Digest reported both CCD information and data gathered in other ways (such as surveys) and by other organizations (the Census Bureau and the National Education Association), we report our analysis of CCD and non-CCD data separately.

Table 3.1 shows that the age of data reported in each issue of the Digest for both 1980 and 1983-84 ranged from 6 months to more than 10 years. Judging from the age of other non-CCD data reported by NCES and other agencies, the delays associated with CCD are not unique. However, in 1980, other agencies or sources produced a higher percentage of relatively current data (12 months old or less). Comparing 1983-84 with 1980 indicates an increase in age, particularly for data generated under the CCD system of reporting. In addition, the 3-year interval between Digests (in this instance, between 1983 and 1986) means that the most recent information can be even more out of date. We therefore concur with the critics who regard the timeliness of routinely reported data as a serious concern.
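The age measure used in this analysis is simple elapsed time: the number of months between a data element's reference period and the publication date of the Digest that reports it. Below is a minimal sketch of the computation in Python; the element names and dates are illustrative, not taken from an actual Digest.

    from datetime import date

    def age_in_months(reference: date, published: date) -> int:
        """Whole months elapsed between a data element's reference
        period and the publication of the volume reporting it."""
        return ((published.year - reference.year) * 12
                + (published.month - reference.month))

    # Illustrative entries only; the actual Digest contents differ.
    published = date(1986, 9, 1)  # assumed publication month of the 1986 Digest
    elements = {
        "enrollment by grade": date(1983, 10, 1),   # fall 1983 reference period
        "per-pupil expenditures": date(1981, 7, 1), # hypothetical older element
    }
    for name, reference in elements.items():
        print(f"{name}: {age_in_months(reference, published)} months old")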

8G. Hall et al., Alternatives for a National Data System on Elementary and Secondary Education (Washington, D.C.: Center on Education Statistics, December 20, 1985).

Table 3.1: Age of CCD and Non-CCD Data Reported in the 1980 and 1983-84 Editions of the Digest of Education Statistics [table not reproduced in this scan]

Source: Some of the 1980 CCD data are from the Elementary and Secondary Education General Information Survey. The remainder of the data are from CCD.

One reason for these reporting lags is that the review and publication processes require more time now than in previous years. For example, the 1986 Digest was submitted for review in August 1985 but not published until 13 months later. In contrast, the draft Digest for 1964 was submitted for review in June and published 3 months later. One official noted that reducing the length of the review process could reduce the lag between data collection and publication of the results, especially since the 1986 document was changed very little by the review.

A second reason for the publication lags is the timeliness of state reporting. It takes longer now than in previous years to obtain the information required from the state education agencies. We were not able to determine the reasons for this, but pragmatic steps might be to establish cutoff dates and to use estimates for delinquent states.
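Neither step is spelled out in the reviews we examined. As one hedged illustration of the second step, a collection cycle could close on a fixed cutoff date and fill in delinquent states by carrying their prior-year values forward. The state names and figures below are hypothetical; CCD documents no such routine.

    # Cutoff-plus-imputation sketch for late state reports.
    # All names and figures are hypothetical.
    prior_year = {"State A": 812_000, "State B": 455_000, "State C": 1_204_000}
    received_by_cutoff = {"State A": 825_000, "State C": 1_198_000}  # State B is delinquent

    national_total = 0
    for state, last_value in prior_year.items():
        value = received_by_cutoff.get(state, last_value)  # carry forward if missing
        note = "" if state in received_by_cutoff else " (estimated from prior year)"
        print(f"{state}: {value:,}{note}")
        national_total += value
    print(f"National total: {national_total:,}")

A growth-rate adjustment or a regression on prior years would serve equally well; the point is that a published estimate carrying a quality flag is timelier than waiting on the last state.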

Technical Adequacy

As we noted above, CCD is composed mostly of data derived from state administrative records. The system was designed to provide a census of schools and of local and state education agencies. For this type of information, the accuracy of the data and their comparability across state education agencies are of central concern. Our review found limited evidence on changes in the quality of CCD-derived data. We focus on the availability, comparability, and accuracy of selected data elements.

Availability

Whether information from administrative records can be reported to the department depends on whether and how the records are maintained by state education agencies. The most recent and complete assessment of these issues as they pertain to CCD was conducted by the State Education Assessment Center. This work, known as the Education Data Improvement Project, was supported by the Council of Chief State School Officers and NCES.9 State by state, the project examined the comprehensiveness and comparability of selected data elements, some of which were part of CCD while others were deemed important enough to be added to CCD.

The Education Data Improvement Project showed that the states differ substantially in the availability of data elements. For example, all the state education agencies that participated in the study can report enrollment or membership data on public school students, but only 80 percent (including the District of Columbia) can report similar data for nonpublic schools.

To gain a better understanding of this diversity, we examined the percentage of states that maintained each of 35 data elements at the school district level. Twelve data elements were part of CCD and the remainder were identified by Education Data Improvement Project staff as elements important enough to include in CCD. Since the states can differ in the level of aggregation they maintain for each data element, table 3.2 displays the frequency of data elements available at the school level of aggregation: only 2 of the CCD data elements are available at the school level for 40 or more of the states, but 11 of the 23 proposed data elements are available at the school level for 30 or more states.

Table 3.2: Data Elements Available at the School Level of Aggregation [table not reproduced in this scan]

Source: The school universe file of the Common Core of Data for 48 states and the District of Columbia reported in Council of Chief State School Officers, Summary: State Collection Practices on Universe Data Elements (Washington, D.C.: U.S. Government Printing Office, September 1986).
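The tabulation underlying table 3.2 reduces to a count, for each data element, of the jurisdictions that maintain it at the school level. Below is a minimal sketch, assuming per-element sets of reporting jurisdictions; the element names and state sets are hypothetical, not the actual CCD school universe file.

    # Count, per data element, how many of the 49 jurisdictions
    # (48 states plus the District of Columbia) maintain it at the
    # school level. Names and sets are hypothetical.
    school_level = {
        "membership": {"AL", "AK", "AZ", "DC"},  # ...and so on
        "days in session": {"AL", "AZ"},
        "high school graduates": {"AZ"},
    }
    counts = {element: len(states) for element, states in school_level.items()}
    for element, n in sorted(counts.items(), key=lambda item: -item[1]):
        print(f"{element}: maintained at the school level by {n} jurisdictions")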

"The goals of the Education Data Improvement Project are to describe state collection of CCD data elements, describe other elements that might make it more adequate and appropriate for reporting on the condition of the nation's schools, and recommend to CES and the states ways for making it more comprehensive, comparable, and timely.


Comparability

Data on schools and local and state education agencies must be comparable to be useful. Critics of CCD have argued that the data are not comparable because definitions of variables differ within and across states. For example, NCES and others note that school attendance is defined to include "excused absences" in California but not in other states.

The Education Data Improvement Project assessed, in detail, the similarity of definitions and procedures for enrollment, fall enrollment, membership, and average daily membership. A comparison of state definitions and procedures with those prescribed by NCES showed that many of the states that collect these data elements are consistent in their definitions of "enrollment" (27 of 32 states, or 84 percent), "membership" (40 of 40 states, or 100 percent), and "average daily membership" (40 of 40 states, or 100 percent). In contrast, most of the states (44) maintained data labeled "fall enrollment," but few (only 17, or 39 percent) agreed with a common definition. The "fall enrollment" definitions differed in the date used to establish enrollment (September or October) or in criteria (different numbers of days that had to pass before taking the count). Many of the states that agreed on definitions of the various data elements differed in the procedures they used for calculating them. To explain these and other state-to-state differences, the project's staff observed that NCES is often inconsistent in its use of terms on data collection forms and in the guidelines for completing them.
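The agreement figures cited above are simple proportions: of the states that collect an element, the share whose definition matches the one prescribed by NCES. Below is a minimal sketch using the counts reported in the text; the computation, not the counts, is ours.

    # Agreement of state definitions with those prescribed by NCES,
    # using counts reported by the Education Data Improvement Project.
    # Each entry is (states collecting the element, states consistent with NCES).
    definitions = {
        "enrollment": (32, 27),
        "membership": (40, 40),
        "average daily membership": (40, 40),
        "fall enrollment": (44, 17),
    }
    for element, (collecting, consistent) in definitions.items():
        share = 100 * consistent / collecting
        print(f"{element}: {consistent} of {collecting} states agree ({share:.0f} percent)")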

Precise estimates of the degree to which CCD elements are in error are difficult to obtain. Although NCES planned in the early 1980's to develop a program of quality-control studies of the data in its collection (similar to the program the Bureau of the Census conducts for its Current Population Survey), comprehensive assessments were not carried out. Reviewers of NCES activities have illustrated technical problems by making selective comparisons that may not represent all elements of the CCD data base. They have found

• estimates of dropout rates that differed by 50 percent,

• estimates of school discipline problems that differed by a factor of 10, depending on the source,

• vocational education enrollments in some states that exceeded their entire high school populations, and

• estimates of the size of the population of students with limited proficiency in English that differed by as much as 200 percent.


These examples illustrate some serious inaccuracies, but assessments of their prevalence within CCD have not been undertaken. A 1985 assessment conducted as part of the Projections of Education Statistics program within NCES does, however, provide some indirect evidence. The accuracy of a projection depends on the adequacy of the projection methods (NCES used methods developed by the Bureau of the Census) and on the consistency of the base data (drawn from CCD) over time, so an analysis of projection error provides a partial basis for evaluating the degree of error in the data. Such an analysis cannot detect reporting biases that persist from year to year, but differences between projected and reported values do provide some evidence of the magnitude of year-to-year instabilities and other errors. Flaws in the projection methodology will also contribute to such differences, but for short-run forecasts, inaccuracies in the data used are likely to contribute more to projection error than is the methodology.

NCES has been making projections of student enrollment, instructional staff, degrees awarded, and expenditures for elementary, secondary, and postsecondary education since the mid-1960's. In 1985, NCES staff assessed the accuracy of their 1966-82 projections by examining how closely earlier projections resembled the data reported later for those same years. For example, enrollments projected for 1980 were compared with the enrollments actually reported in 1980. For this assessment, multiple sets of projections (with forecast horizons of 1 to 10 years) were examined, and the average absolute percentage of projection error was used to assess the general accuracy of the NCES projections for enrollment, instructional staff, and degrees awarded.
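The average absolute percentage of projection error used in that assessment is the mean, over the years evaluated, of |projected - actual| / actual, expressed as a percentage. Below is a minimal sketch with invented enrollment figures; the real assessment used the 1966-82 projection series.

    # Average absolute percentage error of a set of projections.
    # Enrollment figures (in millions) are invented for illustration.
    def avg_abs_pct_error(projected, actual):
        """Mean of |projected - actual| / actual, as a percentage."""
        errors = [abs(p - a) / a for p, a in zip(projected, actual)]
        return 100 * sum(errors) / len(errors)

    projected = [45.2, 44.9, 44.5]  # hypothetical 1-year-ahead projections
    actual    = [45.0, 44.7, 44.6]  # hypothetical reported values
    print(f"Error: {avg_abs_pct_error(projected, actual):.2f} percent")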

With short forecast horizons (1 to 2 years), primary and secondary school enrollment projections were in error by less than 1 percent, as seen in table 3.3. Projection errors were higher for the number of high school graduates (1 to 2 percent at short forecast horizons) and less than 2 percent for instructional staff. Table 3.3 also shows that the projections become considerably less accurate as the forecast horizon increases, especially for 10-year projections. Although the evidence is indirect, the short-horizon results suggest that for some variables, inaccuracies might be fairly small even if all the errors detected were the result of problems in the CCD data.
