
Measuring Institutional Performance Outcomes

  • ID: 42746
  • Report
  • January 1998
  • 83 pages
  • American Productivity & Quality Center (APQC)
Since the mid-1980s, assessment and performance measures have grown steadily
in importance at American colleges and universities. Most such measures
began because they were required for accountability purposes. Like most
public enterprises, state-funded higher education institutions must increasingly show “results” to justify continued public funding. State accountability requirements are reinforced by accrediting procedures, which now demand assessment of both public and private institutions as an element of quality assurance. These demands, moreover, are part of an international trend. Colleges and universities in Western Europe, in Australia/New Zealand, and increasingly in South America, are also being required to demonstrate their effectiveness in this manner.

Mandating measures for accountability purposes, however, does not ensure their
use by institutions in making improvements. Indeed, many observers (including the state officials who prescribed such measures) have been disappointed at the degree to which colleges and universities have employed assessment and performance measures to actively manage their internal affairs. The real stimulus to do so has, in most cases, emerged only recently, and it has not come from government sources. Instead—like most other enterprises—higher education institutions now face the daunting challenges of increasing their productivity within fixed costs, responding to escalating (and changing) demands for service, incorporating new technologies and ways of doing business, and adjusting quickly to shifts in their external operating environments. Under these conditions, experience in other sectors has shown, the use of performance measures as guides to planning and restructuring can be extremely valuable.

Yet, actually harnessing such information for purposes of internal improvement has proven extraordinarily difficult. Measures of learning outcomes are still regarded with some suspicion among academics because of both their imprecision in comparison with “hard” measures of input and real doubts about whether such measures are truly able to capture the most fundamental aspects of learning. From a management perspective, moreover, most colleges and universities are extraordinarily decentralized and consensual. Both factors have meant that it has been difficult to incorporate measures of institutional performance into the fabric of institutional decision making.

Emerging experience suggests that such information can nevertheless be of considerable value. Clear performance measures help render broad and obscure institutional goals more understandable to those charged with achieving them. Their development and use often expose hidden contradictions in goal statements as originally framed, because this process requires far more concrete definitions of how “success” will be recognized. As such, performance measures can serve as a common “organizational language” to align and mobilize action—as well as an unambiguous signal to both external clients and governing bodies that the institution sets clear directions for itself. Most importantly, indicators of performance can help establish and support a continuous process of self-correction at all levels of the organization—providing ongoing feedback about the particular aspects of institutional performance that need attention and an important tool for managers to direct resources toward improvement.

Experience has equally shown that the use of performance measures can encounter multiple organizational pitfalls. Such measures, by their very nature, can be misinterpreted—especially if the data on which they are based are imprecise. Institutional and unit-level outcomes may vary, for instance, because of differences in context or setting that are outside management control—or through sheer statistical instability. Under such circumstances, “punishing” a lack of performance is highly inappropriate. More perniciously, performance measures may create false incentives for action—inducing administrators to “manage the numbers” rather than fix the underlying problems that the measures may reveal. Finally, the information available may be simply too voluminous, or too complicated, to profitably digest and apply. Or it may be too general to suggest what really needs to be done.

Despite these drawbacks, collecting and using information about results is no
longer a choice for most colleges and universities. Demands for concrete data on “return on investment” and for information to guide “consumer choice” will likely continue to increase, just as institutions themselves face rising challenges of managing effectively within heavy resource constraints. On the academic side, these data will prominently address learning outcomes, the success of graduates in obtaining employment, and their performance in further education. At the same time, they will increasingly emphasize “good practices” in instructional delivery—for instance, the use of particular curricular features or teaching techniques (collaborative learning, capstone courses, or active-learning approaches) that are known to be effective. On the administrative side, such measures will more and more reflect equivalent “service industry” practice—emphasizing client satisfaction, response and cycle times associated with key processes, costs, and the maintenance of core assets. In both areas, moreover, improvements in data-collection capacity and measurement precision will add quantity and quality to the store of data about performance available to every college and university.

But the challenge of putting such measures to work effectively remains formidable. Colleges and universities have much to learn from other enterprises that have followed similar paths—as well as from each other. The sponsors of this study recognized from the outset that there would be no easy answers and indeed found that even basic questions were at times difficult to frame. In the course of examining their own and partner practices, however, all involved uncovered lessons that they could use.

STUDY SCOPE

The following scope defines the content and structure of this benchmarking study. Sponsors spent a day and a half with the project team from APQC and subject matter expert Peter Ewell, collaborating to create this scope.
Master Scope Statement: The project will concentrate on using performance measures to improve learning throughout the institution or organization.

I. Types of Performance
- Areas/domains of performance for academic institutions
- Cross-cutting dimensions of performance that apply to all kinds of organizations
- Specific methods of assessment used to gather information on these dimensions (and their strengths and weaknesses)

II. Organizational Uses/Applications of Performance Measures
- Informed institutional decision making
- Process improvement
- Achieving better “institutional alignment”—horizontally across units and departments, and vertically through organizational levels
- Managing communication with stakeholders and constituencies—internal and
external
- Overcoming organizational barriers to the use of performance indicators

STUDY KEY FINDINGS

This report has been organized into three macro topics:

1. origins, development, and characteristics of institutional performance measures;
2. use of performance measures; and
3. organizational cultures supporting the use of performance measures.

Within these three macro topics, nine key findings have emerged:
1. The best institutional performance measures communicate the institution’s core values.
2. Good institutional performance measures are chosen carefully, reviewed frequently, and point to action to be taken on results.
3. External requirements and pressures can be extremely useful as starting points for developing institutional performance measurement systems.
4. Performance measures are best used as “problem detectors” to identify areas for management attention and further exploration.
5. Clear linkages between performance measures and resource allocation are critical, but the best linkages are indirect.
6. Performance measures must be publicly available, visible, and consistent across the organization.
7. Performance measures are best considered in the context of a wider transformation of organizational culture.
8. Organizational cultures supportive of performance measures take time to develop, require considerable “socialization” of the organization’s members, and are enhanced by stable leadership.
9. Performance measures change the role of managers and the ways in which they manage.

BENCHMARKING METHODOLOGY

The past decade has seen wrenching reorganization and change for many organizations. As firms have looked for ways to survive and remain profitable, a simple but powerful change strategy called “benchmarking” has evolved and become popular. Benchmarking can be described as the process by which organizations learn, modeled on the human learning process. A good working definition is “the process of identifying, learning, and adapting outstanding practices and processes from any organization, anywhere in the world, to help an organization improve its performance.” The underlying rationale for the benchmarking process is that learning by example, from best-practice cases, is the most effective means of understanding the principles and the specifics of
effective practices.

The most important aspects of benchmarking are twofold: First, it is not a fixed technique imposed by “experts” but rather a process driven by the participants who are trying to change their organizations; and second, it does not use prescribed solutions to a problem but is a process through which participants learn about successful practices in other organizations and then draw on those cases to develop solutions that are most suitable for their own organizations.

Benchmarking is not copying, networking, or passively reading abstracts, articles, or books. It is action learning, as demonstrated in the description of the consortium methodology. Benchmarking is also not simply a comparison of numbers or performance statistics. Numbers are helpful for identifying gaps in performance, but true process benchmarking identifies the “hows” and “whys” for performance gaps and helps organizations learn and understand how to perform at higher levels.
- Sponsor and Partner Organizations

A complete listing of the sponsor organizations in this study, as well as the best-practice (“partner”) organizations that were benchmarked for their
innovation and advancement in measuring institutional performance outcomes.

- Executive Summary

A bird’s-eye view of the study, presenting the methodology used and the key findings it uncovered. These findings are explored in detail in the following sections.

- Key Findings

An in-depth look at the nine key findings in three macro topic areas: origins, development, and characteristics of institutional performance measures; use of
performance measures; and organizational cultures supporting the use of performance measures. Organizational examples and quantitative data
provide supporting evidence for the findings.

- Partner Organization Profiles

Background information on the partner organizations, as well as a look at their performance measurement systems.