20/09/2016
Measuring the Publishing Productivity of Economics Departments in Europe


We measure performance on the basis of a publishing productivity index that allows us to account for differences in research inputs among departments.

Published in: Scientometrics

Rankings of academic departments are widely used by universities throughout the world as benchmarks to allocate their research funds efficiently across departments and, further, as signals of high-quality education to attract or retain the most skillful and promising students and faculty. They are also used by academic departments themselves to define performance targets and shape optimal marketing strategies, and by academics and students when making decisions on career advancement and investments in education, respectively. At the aggregate level, rankings serve as informative policy instruments for national governments, as well as for country unions, in defining research budget levels and allocating them optimally to domestic universities and member countries, respectively. For instance, the development of the Lisbon Agenda (2000) and the associated commitment of the European Council (2005) to increase R&D funding in the EU were mainly triggered by the observed gap in leading-edge research between EU member countries and the U.S., as robustly evidenced by worldwide institutional rankings.

In the economics profession, there is a long tradition of ranking departments. Existing work commonly uses various measures of research output to rank departments. Laband (1985) used counts of citations to assess the performance of economics departments, while Yotopoulos (1961) and Niemi (1975) focused on the number of articles published in top journals. Along the same lines, Yeager (1978) and Bairam (1978) considered the total number of pages published in high-ranked journals. Recognizing that the quality of publications matters, Graves et al. (1982) and Scott and Mitias (1996) used AER-equivalent pages to adjust for journal-quality differences. Along the same line of argument, Conroy et al. (1994) and Dusansky and Vernon (1998) also looked at AER-equivalent page counts, using Laband and Piette's (1994) update of Liebowitz and Palmer's (1984) journal ranking to weight journals. Similarly, Kalaitzidakis et al. (2003) provided a worldwide ranking of economics departments, correcting further for biases arising from lagged journal weights and the inclusion of self-citations. There have also been rankings based on Ph.D. placements (Amir and Knauff, 2008) and averages of rank statistics (Coupe, 2003).

Most of the studies highlighted above rank economics departments solely on research output measures, such as the number of articles, article pages, citations, or combinations thereof. Needless to say, such measures lack important information on the use of research inputs and might therefore be considered inappropriate, especially when comparisons are to be made. For instance, published articles, and subsequently citations, are likely to be proportionally related to faculty size. Similarly, differences in research funds, research environment, and other research inputs between departments are likely to explain observed differences in the research output produced. Hence, adjusting at least for some of the input variation between departments is a necessary prerequisite prior to comparing actual department performance, in order to obtain meaningful rankings.

The important dimension of research inputs has been considered by only a limited number of studies in the field. At the micro (department) level, Conroy et al. (1995) and Scott and Mitias (1996) ranked economics departments in the U.S. on productivity performance as measured by output per faculty member. Using NRC (1995) survey data, Thursby (2000) tested for differences in quality ratings between economics departments in the U.S., accounting for faculty size, number of federal grants, and expenditures on library acquisitions. At the macro (country) level, Kirman and Dahl (1994) and Kocher and Sutter (2001) provided aggregated country rankings adjusting for research inputs such as financial resources and population. Finally, Kocher et al. (2006) adopted a DEA approach to compile a productivity-based ranking of OECD countries using each country's R&D expenditures, number of economics departments, and population as research inputs.

Three important observations can be drawn from the existing literature reviewed above. First, most of the work in the field neglects to adjust for differences in research inputs among departments, producing less informative rankings that are inappropriate for comparison purposes; the few exceptional studies that do account for variation in research inputs focus exclusively on the U.S. Second, the majority of studies are based on journal rankings constructed over a period of time that, more often than not, does not coincide with the period of the department rankings. This implies that the journal weights used to adjust for quality differences in publications are likely to misestimate the true quality of the journals at the time of investigation, and subsequently the true performance of departments. Third, most of the existing work provides either university- or country-level rankings but does not combine them. It would be quite informative, though, to assess performance at both the micro and macro level, combining information from department and country rankings produced using the same methodology.

In this paper, we assess the relative performance of economics departments in Europe using publication data in a core set of thirty-five top research journals in economics during the period 2007-11. Rather than focusing exclusively on research output measures, we assess performance on the basis of a publishing productivity index that allows us to account for differences in research inputs among departments. The measurement of publishing productivity is based on counts of AER-equivalent articles per faculty member, using the updated journal weights of Kalaitzidakis et al. (2011) computed over the same period as our study, thus overcoming any concerns associated with lagged-weights bias. Data on faculty size were obtained from an online search of departments' websites at the time of investigation. Based on publishing productivity performance, comprehensive rankings are constructed at the department level, as well as at the country level by aggregating the research output and inputs of economics departments in each country. Finally, the distance of Greek economics departments from the top European departments is assessed.
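As a rough illustration of the idea, the sketch below shows how an AER-equivalent article count per faculty member could be computed. It is a minimal sketch only, not the paper's actual code or data; the journal weights, department names, publication records, and faculty sizes are hypothetical placeholders.

# Minimal sketch of a publishing productivity index:
# AER-equivalent articles per faculty member.
# All weights and records below are hypothetical placeholders.

# Journal weights normalized so that the American Economic Review = 1.0
journal_weights = {
    "American Economic Review": 1.00,
    "Journal X": 0.45,  # hypothetical weight
    "Journal Y": 0.30,  # hypothetical weight
}

# For each department: the journals of its articles in the core set
# over the ranking window, and its faculty size.
departments = {
    "Dept A": {"articles": ["American Economic Review", "Journal X", "Journal X"],
               "faculty_size": 20},
    "Dept B": {"articles": ["Journal Y", "Journal Y"],
               "faculty_size": 8},
}

def publishing_productivity(dept):
    """AER-equivalent article count divided by faculty size."""
    aer_equivalent = sum(journal_weights[j] for j in dept["articles"])
    return aer_equivalent / dept["faculty_size"]

# Rank departments by publishing productivity (highest first).
for name in sorted(departments,
                   key=lambda d: publishing_productivity(departments[d]),
                   reverse=True):
    print(name, round(publishing_productivity(departments[name]), 3))

A country-level figure follows the same logic by summing AER-equivalent articles and faculty sizes over all departments in a country before taking the ratio.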
