To Rank or Not to Rank? A Conversation With Amit Mrig, Founder and CEO of Academic Impressions

Welcome back! It has been a long summer... and the academic year once again is in full swing. This pundit took the last two months off from blogging as I transitioned away from the university presidency.

I recently had the pleasure of speaking with Mr. Amit Mrig, founder and CEO of Academic Impressions, an organization which, for more than a decade, has been providing higher education professionals with practical, experience-based tools and information to help them succeed in an increasingly challenging fiscal, cultural, technological, political, and academic environment.

Our conversation was wide-ranging and stimulating — Amit is a passionate advocate for improving higher education in the U.S. and for ensuring access and success for the widest possible student population — as am I.

Part of our conversation centered on the (then) upcoming rankings to be provided by the federal government and the increasing emphasis many consumers place on rankings. And so we asked: Are these rankings fair, or are they biased? Are they an accurate measure of the value of a university? How about the quality of an education? On every count, we concluded, emphatically, “No.”

To understand why we responded this way we should consider a few facts, and three important issues facing higher education today.

First, let’s set the national context:

  • In 2012, 20.6 million students were enrolled in an institution of higher education in the U.S., over 70% of them attending a public college or university.

  • The top-ranked 100 schools (by the US News & World Report [USNWR] National Universities 2016 ranking) enroll slightly under 2.3 million students, or about 9% of all enrolled students... meaning that over 90% of all students will not attend a top-100 university, even though these schools enroll a disproportionately large share of students relative to their small number*

Now for the issues facing Higher Education today. Firstly, we should recognize a pressing need to increase the number of individuals with higher education (defined broadly). It is estimated that the jobs of the future will require that at least 60% of the workforce have some form of higher education by 2020, and perhaps as high as 75% by 2030.

Secondly, this needed increase in the number of graduates will not come from enrolling the usual suspects. Rather it will come from engaging and enrolling students who currently don’t think they can be students — under-represented minorities and the socio-economically disadvantaged — while ensuring that they succeed.

Thirdly, the decreasing affordability of higher education has actually resulted in a greater barrier to entry for many of the same students we are trying to attract.

So it’s fair to ask: if the nation and its people need more high-quality, affordable higher education, why don’t the rankings we pursue actually measure that outcome?

Let’s return to our primary questions. Are these rankings fair or are they biased? As Amit reminded me, “The ranking factors currently are input-driven much more than output-driven, which skews the results in favor of private universities, and away from those public institutions that provide the larger proportion of education in this country.”

To understand what he means, let’s look at the ranking criteria generally in use. For example, reputation, student selectivity, faculty resources, financial resources, and alumni giving together account for 70 percent of the weight in the USNWR rankings (see Box). Graduation/retention rates and graduation rate performance account for the remaining 30 percent.

So it stands to reason that a university with an already high ranking (reputation), that spends a lot of money, and that rejects the most applicants, will wind up at the top of the list.

“Think about it,” Amit said, “Reputation alone accounts for just under a quarter of the ranking — that makes it almost a self-fulfilling prophecy!”

How about measuring a university’s broader purpose and societal value?

All universities — public and private, selective and non-selective — are recipients of tremendous government support through hefty subsidies, tax incentives, and direct investment. As a nation, we invest in higher education because of the broad and valuable benefits we receive as a society through research and innovation, job creation, economic development, and a more educated, productive and healthy citizenry.

In light of this, why is student selectivity so highly valued by USNWR? Bragging about who is not included seems to be one of humanity’s less admirable pastimes. Shouldn’t an institution that may not be the state’s flagship, but does an exemplary job educating local and regional students be more highly valued than one that excludes them?

Or as Amit reminds us, “Let’s ask how many students a university does serve well, rather than how many it doesn’t.”

Lastly, giving so much weight to dollars spent actually penalizes efficiency. At a time when average tuition is approximately 40% of a median family’s income, shouldn’t we be incentivizing colleges to operate at higher levels of quality and lower costs, passing along savings to students?

The fact is that the greater the resources, the greater an institution’s opportunity to be highly ranked. This even applies to institutions ranked as a “Best Value,” since those rankings strongly emphasize “net cost,” and the size of a school’s endowment makes a huge difference in that calculation: schools with more resources can discount tuition more deeply and provide more “need-based” aid.

Are the rankings an accurate measure of the quality of an education?

Last year, the Gallup-Purdue Index Inaugural National Report surveyed over 30,000 graduates to better understand what students actually gain from a college experience and to explore the question of whether attending a certain type of institution provides better results than another. They sought to explore the question holistically, not looking at just employment rates and average earnings. They looked at measures of workplace engagement (employment rates, level of engagement at work), well-being (physical, social, community, purpose, and financial) as well as alumni attachment to their alma mater.

Their findings are clear — on well-being, there is no distinction between graduates of public colleges and private ones, and no distinction between graduates of the top 100 USNWR-ranked institutions and graduates of all other schools.

But as Amit noted, “the evidence suggests that the more loan debt a student takes on — generally higher at private, selective institutions than at publics — the worse their well-being according to the Gallup-Purdue measures.”

A better model

A recent report from the CHEA International Quality Group suggested two actions to address the current flaws in university ranking: i) higher education should become more actively engaged in the global conversation about quality, and identify meaningful measures which can demonstrate value and contribution (a topic deserving of more discussion); and ii) higher education, along with key stakeholders, should agree upon a common international database to be held by a not-for-profit international organization.

As examples, the report proposed using such ranking alternatives as the U-Multirank, which embraces a greater diversity of institutions, and the U21 Ranking of National Higher Education Systems, which looks at the overall capacity of higher education to bring benefit to society.

While rankings are a natural reflection of our inherent competitiveness, let’s just make sure that they measure what we actually need and want (even if sometimes we don’t know it).

*Consider that the USNWR top 100 universities account for only 2% of the over 4,500 colleges and universities in the U.S. today, but enroll about 9% of all students in the country.

Many thanks to Beth Brigdon, VP of Institutional Effectiveness, Augusta University, for helpful data and comments.
