A new list of the world's best universities has been published by The Times Higher Education Supplement. As always, Harvard University comes first, followed by Oxford and Cambridge (of course). The top 15 are all from the United States and England, with the exception of McGill University in Canada. Only three from Latin America appear: the Autonomous University of Mexico, in 74th place, and the universities of São Paulo and Campinas, in 175th and 177th.
For this ranking, the publication surveyed the opinions of peers and employers, the student-to-faculty ratio, the number of citations per faculty member, and the internationalization of the faculty and the student body. On a scale of 0 to 100, UNICAMP's highest score, 78, is for the student-to-faculty ratio, and its lowest, 16, is for the internationalization of its student body. USP's highest score is in the peer review, 65, and its lowest, also in the internationalization of its student body, 14.
According to the publication, "despite the presence of South African, Brazilian and Mexican institutions in this table, the overall message of these rankings is that the sort of universities we list here, mainly large, general institutions, with a mingling of technology specialists, are a dauntingly expensive prospect for any country, let alone one in the developing world". But of course it is not only a question of money.
How valid is this ranking? If it were an official ranking, produced by some government or international agency, there would be many reasons to question and criticize it. But since it is a piece of journalism, it should be taken as such. With its limitations and possible biases, this list tells us important things, which we have everything to lose by not taking into account.
Hello Simon,
Thanks for sending this information. I brought the THES lists up and made an informal comparison of them with the Academic Ranking of World Universities, prepared annually by the Institute of Higher Education, Jiao Tong University in the PRC. I also compared the criteria used by the two. (Besides this, I have taught and/or lectured in many of the better universities in the United States and Australia, as well as several in Brazil; not that one's individual observations count for much, but a little personal observation is sometimes informative.)
The two ranking systems appear to be more or less in agreement on the rankings of the top dozen or so. After that they are far from agreeing. So what’s going on?
Each set of ratings depends on the criteria on which it was based. I happen to think that those of the THES don't make a lot of sense. Those of the IHE seem quite good if what one wants is a rating of research universities. That's my preference because I think that the generation of new knowledge ramifies into the other main functions of a great university: teaching undergraduate students, graduate students, and post-doctoral researchers, as well as public service. And it ramifies far beyond the confines of the university.
The generation of new knowledge is also superior to merely transmitting existing knowledge. This is because new knowledge builds upon existing knowledge and adds to it. So what was once new now becomes part of the changing body of existing knowledge.
For those who are interested, there is at least one publication in which the IHE system is discussed (N. C. Liu and Y. Cheng, "Academic Ranking of World Universities - Methodologies and Problems", Higher Education in Europe, Vol. 30, No. 2, 2005). The corresponding author is Liu. His email address is given as follows: ncliu@sjtu.edu.cn.
Having said all that, I would like to see an additional analysis of the IHE's criteria. The current implication seems to be that each criterion variable is a good but partial contributor to the measurement of an underlying phenomenon that could be called "university quality", or maybe the "quality of research universities". That's a testable hypothesis, and it could be carried out by means of a factor analysis. Such an analysis would have the additional virtue of establishing the degree to which each criterion variable contributes to the measurement of the underlying variable (or variables, if the analysis shows that there is more than one).
The THES system may also need such an analysis.
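To make the suggestion concrete, here is a minimal sketch of such a factor analysis in Python, using scikit-learn's FactorAnalysis. The criterion names and scores below are invented for illustration only; they are not the actual IHE or THES figures.

```python
# Minimal sketch of a one-factor analysis of ranking criteria.
# The criteria and scores are hypothetical, not real IHE or THES data.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical scores (0-100) for a handful of universities on four criteria.
criteria = pd.DataFrame(
    {
        "peer_review": [65, 78, 90, 55, 70],
        "citations_per_faculty": [60, 72, 95, 40, 66],
        "student_faculty_ratio": [78, 50, 85, 60, 55],
        "international_students": [14, 16, 80, 30, 45],
    },
    index=["U1", "U2", "U3", "U4", "U5"],
)

# Standardize the columns so the loadings are comparable across criteria.
z = (criteria - criteria.mean()) / criteria.std()

# Fit a one-factor model: do the criteria share a single underlying
# dimension that could be called "university quality"?
fa = FactorAnalysis(n_components=1, random_state=0)
scores = fa.fit_transform(z)

# The loadings show how strongly each criterion contributes to the factor;
# the factor scores order the universities on that underlying dimension.
loadings = pd.Series(fa.components_[0], index=criteria.columns)
print("Factor loadings:\n", loadings)
print("Factor scores per university:\n", scores.ravel())
```

If one factor left large unexplained correlations among the criteria, the model could be refit with additional components to see whether the criteria measure more than one underlying variable.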
Thanks for bringing the THES’s system to our attention.
Best personal regards.
Archibald O. Haller
Editor, POPULATION REVIEW
Professor Emeritus of Sociology and Rural Sociology
University of Wisconsin-Madison, USA