To: Ross Romano, Ontario Minister of Training, Colleges and Universities

From: Charles M. Beach and Frank Milne

Date: October 4, 2019

Re: Unintended Consequences Abound in Post-Secondary Metrics

One feature of Ontario’s recent Strategic Mandate Agreements with individual universities and colleges is the planned use of performance metrics as a basis for future funding and for enforcing the new policies. Any implementation plan must apply such tools in a sensible and balanced way.

Over the last three or four decades, there has been increasing use of metrics (statistical measures) of organizational performance. But the services of most public-sector organizations are complex and hard to capture in a few simple metrics.

Attempts to use simplistic metrics for incentives and rewards can thus easily produce unintended perverse outcomes. We examined several examples of the use and abuse of metrics in tertiary education, informed by experience in Australia, the United States and the United Kingdom.

In the late 1980s and early 1990s, Australia adopted an incentive scheme for the “production” of undergraduates: universities were rewarded by the number of students they graduated. This one-dimensional scheme created very perverse incentives.

There were no incentives to promote quality. Instead, the scheme rewarded lower standards and graduating as many students as possible. There are many indications that mass classes, group assessment and multiple-choice exams are reducing the quality of education for all students in the core disciplines. Australian academics and employers have been complaining in recent years about the quality of recent incoming students and graduates, especially fee-paying foreign students.

The United Kingdom introduced periodic research rankings for universities and departments. These rankings counted faculty research output, weighting articles by the prestige of the journal in which they appeared. Such rankings can again introduce perverse incentives.

Just before the ranking period, departments could try to induce faculty with impressive CVs to join them. Such hiring may be calculated to boost the department’s score, placing it in a higher category with increased funding. The new hire’s salary would be more than compensated for by the increased research funding reaped by the department. The result would be a disruptive churn of faculty within a discipline just before each ranking exercise.

If research rankings rely on simplistic metrics such as the number of journal articles and citation counts, fields that rely mainly on journals for disseminating research excel. Conversely, the rankings devalue disciplines (e.g., the humanities and some social sciences) that rely heavily on detailed monographs and books, which require long periods of research and gestation to explore topics in depth.

Teaching undergraduates and graduates requires great skill in imparting complex knowledge, encouraging independence of thought and fostering intellectual maturity. These skills can vary considerably across fields.

For example, there have been suggestions that the quality of teaching should be evaluated by the “value added” of a discipline or school. It is tempting to use market data on the starting salaries of graduates. But those data can vary widely across disciplines and across local regions.

Another example is the use of standardized student evaluations. Any teacher is well aware of the limitations of these surveys. Instructors can boost their popularity through easy grading and pandering to students. Conversely, compulsory technical courses are invariably unpopular, and instructor evaluations reflect that unpopularity.

The application of high professional standards, judgement at the departmental level, counselling and peer review is probably the best one can hope for in improving teaching standards.

Basing salaries, promotion and other rewards on narrow metrics directs academics toward the behaviour being measured, and away from other activities that may be highly valuable yet hard to quantify.

For example, if collegiality and deeper scholarship are not rewarded, and the main emphasis is on narrow publishing metrics, then an academic department can degenerate into small academic silos pursuing the latest research fad or “hot topic,” churning out a stream of incremental papers, with a dispirited fringe of adjuncts and graduate students teaching the undergraduate and basic graduate courses.

A top-down, more centralized, metric-based tertiary system also increases administrative burden.

There has already been an increase in “administrative expenditures” in the system. In Australia, which has gone considerably further down this road than Canada, these concerns are also reflected in the very high salaries of university vice-chancellors (and their associated executives).

In summary, these issues are complicated, and due attention needs to be devoted to the unintended consequences and perverse incentives that can follow from newly conceived policies.

Charles M. Beach is Professor Emeritus at Queen’s University and Frank Milne is BMO Professor of Economics and Finance at Queen’s University.

To send a comment or leave feedback, email us at blog@cdhowe.org.

The views expressed here are those of the authors. The C.D. Howe Institute does not take corporate positions on policy matters.