“Composite” Ranking–The Most Accurate College Ranking Methodology

Each year academia and industry alike make a big to-do about the new college rankings reports produced by popular ranking organizations and national publications such as U.S. News & World Report, Forbes, Newsweek, Wall Street Journal and Princeton Review. And each year aspiring college students, degree-seeking professionals, and those in academia and industry rely on these rankings to make important decisions.

A top three college ranking in Business Week can easily sway a student’s decision with respect to which business school to attend. A low ranking in a college ranking report can spur a college to focus on better student placement in order to show up higher in next year’s rankings.

Graduating from a middle-tier university versus a top ranked Ivy League institution may be a deciding factor in an employer’s hiring decision–and a college graduate’s employment opportunities. So are college rankings an accurate indicator of quality? It depends on how you define quality and value. Are college rankings important? The fact that people pay attention to them is what really makes them important.

Studies show that if you graduate from a top business school, you’re more likely to receive a higher starting salary than if you graduate from a lower ranked program. If you earn your degree at an Ivy League law school, you’re likely to receive two to three times as many job offers as you would if you graduated from a less reputable law school.

Some might interpret these findings in a way that would suggest that if you attend a top ranked college, or an Ivy League institution, you’ve got it made, and if you don’t, you’re going to struggle–but that’s not the case.

Notwithstanding the attention that college rankings garner, they’re still just rankings. Every organization that produces college rankings has its own set of guidelines, biases and ranking methodologies. There isn’t any universality, standardization or consistency across the various publications as to how colleges are ranked. (Of course, if there were, you’d really only need one college ranking report.) For example, the Washington Monthly bases its college rankings in large part on the contribution a college, its faculty and its students make to our country in the areas of service, research, and social mobility.

College rankings produced by the Princeton Review are based on survey feedback received from actual students and reflect student opinion. Another ranking organization–the Wall Street Journal–bases its college rankings on several criteria, including career services offered, quality of faculty, and overall student and staff satisfaction. However, all college rankings have one thing in common: they’re all very subjective.

There has been ongoing debate over the “correct” methodology for generating useful, accurate college rankings–and we predict the debate will continue for years to come. Due to the subjective, even inaccurate, methodologies used to generate college rankings, many colleges are now talking about opting out of the annual surveys used by U.S. News & World Report, Princeton Review and other ranking organizations that employ similar ranking methodologies–and several liberal arts colleges already have!

Another major problem with college rankings is that they’re rigid and simply don’t account for many of the intangibles associated with a quality college education. Most college rankings don’t account for the needs, character, and unique attributes of the individual. Just because you can get into MIT and MIT is a top ranked school doesn’t mean that MIT is the right school for you. There’s more to consider than prestige when selecting a college.

Our College Ranking Methodology

Our college ranking methodology is simple and straightforward–we select the best from the best! The editor-compiled college rankings published on EducationCorner.com are a composite of the major college ranking reports and publications, where each ranking report is assigned a weight based on how well its rankings approximate our definition of quality and value. Publications included in our composite rankings include U.S. News & World Report, Forbes, Wall Street Journal (WSJ), Business Week, Princeton Review, and to a lesser degree several other publications including, but not limited to, The Washington Monthly and Kiplinger.

By producing a composite rankings report that incorporates all major ranking indices based on a weighted value, we’re able to weed out inconsistencies in rankings, reduce bias, and provide a more conservative, accurate and complete view of one college’s real value relative to the value of other colleges in its category.

As we mentioned previously, one of the innate difficulties in producing accurate and useful college rankings is that there is no universally accepted definition of what a “good college” is exactly. Each major publication that produces college rankings develops its rankings based on a slightly different view of what it means to be a good college.

In order to produce composite college rankings that are meaningful, we must first identify which value propositions define the essence of a good college. Our composite rankings are based on the assumption that the measurable indicators of a good college are:

  • Post-graduate Success of Students (20%)
  • Retention/Graduation Rate (20%)
  • Reputation (15%)
  • Student Satisfaction (15%)
  • Academic Quality (10%)
  • Admissions Selectivity (10%)
  • Financial Resources (10%)

While there are many other criteria and intangibles students should consider when evaluating a college, when all is said and done, students attend college to get the best education they can to (1) prepare for a career and (2) land a job when they graduate. If this isn’t accomplished, all else is for naught. The weights for each of the above metrics/indicators were assigned based on this definition of a “good college”.

Major college rankings used to develop our composite rankings are also assigned a weight based on how closely they approximate value according to our definition of a “good college”, as explained above. The major college rankings and their assigned weights are as follows:

  • U.S. News & World Report (30%)
  • Forbes Magazine (20%)
  • Business Week (15%)
  • Wall Street Journal (15%)
  • Princeton Review (10%)
  • Kiplinger (5%)
  • Washington Monthly (5%)
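
As a concrete illustration, the weighted-composite calculation the two lists above describe can be sketched in a few lines of Python. Everything here–the function name, the rank-to-score conversion, and the example ranks–is a hypothetical sketch for illustration, not EducationCorner.com’s actual implementation:

```python
# Hypothetical sketch of a weighted composite score. The publication
# weights come from the list above; the example ranks are invented.
PUBLICATION_WEIGHTS = {
    "U.S. News & World Report": 0.30,
    "Forbes": 0.20,
    "Business Week": 0.15,
    "Wall Street Journal": 0.15,
    "Princeton Review": 0.10,
    "Kiplinger": 0.05,
    "Washington Monthly": 0.05,
}

def composite_score(ranks, total_ranked=100):
    """Convert each publication's rank (1 = best) to a 0-1 score, then
    combine the scores using the publication weights. Publications that
    did not rank the school are skipped and the remaining weights are
    renormalized, mirroring how publications with no relevant category
    are excluded from a category's composite."""
    weighted_sum = 0.0
    weight_used = 0.0
    for publication, weight in PUBLICATION_WEIGHTS.items():
        rank = ranks.get(publication)
        if rank is None:
            continue  # no relevant ranking of this school by this publication
        score = (total_ranked - rank + 1) / total_ranked  # rank 1 -> score 1.0
        weighted_sum += weight * score
        weight_used += weight
    return weighted_sum / weight_used if weight_used else 0.0

# A school ranked by only four of the seven publications:
print(round(composite_score({
    "U.S. News & World Report": 5,
    "Forbes": 12,
    "Wall Street Journal": 8,
    "Princeton Review": 3,
}), 3))  # -> 0.938
```

Renormalizing over the weights actually used keeps a school from being penalized simply because a publication has no relevant category for it.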

One of the difficulties in developing composite rankings based on other college ranking indices is that the grouping and categorization of colleges vary from one index to the next. Additionally, some college rankings publications, such as Business Week, only rank certain types of schools (i.e., graduate business schools), or they limit their rankings to a few select schools in each category. Only publications whose rankings include a relevant category are included in EducationCorner.com composite ranking calculations for a given category. For example, the EducationCorner.com composite business school rankings report includes data from U.S. News & World Report, Business Week, WSJ, Forbes and Princeton Review because each of these rankings offers a relevant business rankings category.

Major College Ranking Indices Included in Our Composite Rankings

The following are the major college ranking publications we include in our composite rankings along with an explanation of their respective ranking methodologies.

U.S. News & World Report
For almost three decades, U.S. News & World Report has published one of the most popular, reputable and comprehensive college rankings indices. U.S. News ranks colleges in a number of categories including National Universities, Liberal Arts Colleges, Regional Universities, Undergraduate Programs and Graduate Programs, among many others.

U.S. News uses various data points and qualitative measures to determine college rankings. They purport their rankings are based on the most reliable indicators of academic quality. Categorization of colleges ranked by U.S. News is based on the Carnegie classification, a widely accepted framework developed by the Carnegie Commission on Higher Education.

U.S. News says they use 16 “reliable” indicators of academic excellence to determine the relative value of each school’s rank. Each indicator is assigned a value (determined by their staff) and then each college is ranked based on their composite weighted score.

Schools that do not use SAT or ACT scores in their admission decisions for first-time students are not included in the U.S. News main college rankings. Schools with too few respondents to the peer assessment survey are also excluded, as are schools with fewer than 200 students, schools with too many nontraditional students, and schools that don’t accept first-year students.

The data used for ranking computations comes from surveys provided by the individual colleges. U.S. News collects data for about 1,800 colleges each year. Other data sources include the National Collegiate Athletic Association (graduation rates), the Council for Aid to Education (alumni giving rates) and the National Center for Education Statistics (financial resources, faculty, SAT and ACT admissions test scores, acceptance rates, retention rates, and graduation rates).

U.S. News also uses a variety of methodologies for ranking colleges in diverse categories such as law, graduate studies, engineering, etc.

U.S. News relies on the following value indicators to generate its general undergraduate college rankings, which represent a majority of its ranking reports.

  • Undergraduate academic reputation (22.5%) – represents the opinions of college presidents, provosts, deans of admission, high school counselors, college counselors and other peer groups with respect to a school’s undergraduate academic excellence. Each peer group is assigned a weight.
  • Retention (20%) – this indicator is based on the assumption that the higher the percentage of first-year students who return to school and eventually graduate, the better the school and its programs. This indicator is based on a six-year graduation rate (80% of the retention score) and the first-year retention rate (20% of the retention score).
  • Faculty resources (20%) – this indicator is designed to measure student satisfaction with school faculty and professors. Data points included within this indicator include class size, faculty salary, professors’ level of education, student-to-faculty ratio and percentage of faculty who are full time.
  • Student selectivity (12.5%) – represents the quality of students admitted to the college or university. This indicator has three components: admissions test scores, percentage of admitted students who graduated in the top 10% of their high school class, and the ratio of students admitted to applicants (acceptance rate).
  • Financial resources (10%) – represents the average spending per-student on instruction, student services, and other educational expenditures.
  • Graduation rate performance (7.5%) – assumes that the graduation rate of students is representative of the quality of a college’s programs and policies. This metric compares actual graduation rates to predicted graduation rates.
  • Alumni donation rate (5%) – reflects the percentage of alumni with a bachelor’s degree or higher who have donated to the school. The assumption is that alumni donations are a good indicator of students’ satisfaction with the education they received.

Compared with other college ranking methodologies, we feel that U.S. News college rankings are above average. However, we believe they are weighted heavily toward measurements that are not aligned with several important value indicators of a college education. Specifically, U.S. News rankings neglect several highly relevant, longer-term indicators of value, including (1) career success, (2) contribution to society and (3) increased quality of life. U.S. News rankings fall short in all three areas.

Princeton Review
The Princeton Review publishes two annual college lists: (1) the Princeton Review College Rankings report and (2) the Princeton Review College Ratings report.

Each year Princeton Review publishes 62 college ranking lists. Ranking categories include Best Colleges, Happiest Students, Party Schools and Jock Schools, among many others. College rankings are based almost entirely on survey feedback received from 136,000 students attending the 380 ranked schools. The survey consists of 80 questions and is designed to assess each student’s opinion of:

  • Academics/administration
  • College life
  • Their fellow students
  • Themselves

The Princeton Review College Rankings report is based almost entirely on student experience and opinion, and it only ranks the top 20 schools in each of the 62 categories. While useful, and interesting, we feel the college rankings report produced by Princeton Review ignores several value indicators of a good college, such as career development, contribution to society, retention/graduation rates, reputation and admissions selectivity. It does however provide an accurate representation of student sentiment regarding each college–and again, most people find Princeton Review college rankings very interesting.

Princeton Review also produces a “Ratings” report. Unlike the Princeton Review College Rankings report, the ratings report is a bit more quantitative in nature. The ratings report ranks colleges based on:

  • Academics – measures study time outside of class, the quality of students the institution attracts, students’ assessments of their professors, student-to-faculty ratio, average class size, use of teaching assistants vs. full-time professors, registration and student resources.
  • Admissions Selectivity – based on class rank, admission scores, and average high school GPA of entering freshmen, percentage of students from out-of-state, and the school’s overall acceptance rate.
  • Financial Aid – measures how much aid a school awards on a per-student basis and how satisfied students are with the financial aid they receive.
  • Fire Safety – measures how well a school is prepared to prevent or respond to fires on campus.
  • Quality of Life – measures how happy students are with their overall college experience outside of the classroom.
  • Green – measures how environmentally aware, friendly and prepared a school is.

So how valuable are Princeton Review College Ratings? It’s open for debate. The ratings offer a more quantitative view of a college’s relative value, but they include several measurements that we would argue are not very good indicators of academic quality (i.e., Fire Safety, Green, etc.). And, notwithstanding any accuracy gained by including more quantitative value measures, the vast majority of people still prefer reading the Princeton Review College Rankings over the Princeton Review College Ratings–even if it’s simply to satiate their curiosity. Heck, who doesn’t want to know which school was nominated as the top Party School in the nation?

Forbes
Every two years Forbes magazine produces its “America’s Top Colleges” list, which covers a number of categories including Best Business Schools in America, Top Midwest Schools, Top Public Colleges and Top Liberal Arts Colleges, among others. Forbes magazine, through a partnership with the Center for College Affordability and Productivity (CCAP) based in Washington, D.C., ranks 650 colleges nationwide.

What do we like about how Forbes ranks colleges? They focus on “output” over “input”. They use metrics that measure actual ROI–the return students get from their investment in education. We like it! While their rankings aren’t perfect, in our opinion, they certainly incorporate a few more accurate indicators of value than other college ranking reports.

Data used to produce college rankings are gathered from various sources. There are 12 general factors used to calculate rankings, with each falling into one of five categories. These categories include:

  • Student Satisfaction (25%) – satisfaction metrics are based on student evaluations from RateMyProfessors.com (7.5%), freshman-to-sophomore retention rates (15%) from the Department of Education’s IPEDS database, and various other social media websites.
  • Post-Graduate Success (32.5%) – is based on alumni salary figures as reported by Payscale.com (10%), America’s Leaders List (22.5%) and a few other publications that rank successful college graduates.
  • Student Debt (25%) – this is an affordability metric that focuses on the cost of education and ability of graduates to pay back their student loans. It is composed of student loan debt load (10%), student loan default rates (12.5%) and predicted vs. actual percent of students taking out federal student loans (2.5%) to finance their education.
  • Graduation Rates (7.5%) – this value indicator is based on a four-year graduation rate and considers both the actual graduation rate (5%) and the actual vs. the predicted rate (2.5%).
  • Academic Success (10%) – this metric is based on student success as measured by student scholarships and fellowships (7.5%) and percentage of students who go on to earn a doctorate degree (2.5%).

Note that Forbes computes a college’s overall annual ranking based on a moving average of the school’s ranking scores over the last three years.
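
The three-year moving average is straightforward to sketch. The function name and scores below are hypothetical, but the smoothing logic is the one described:

```python
# Hypothetical sketch of the three-year moving average used to smooth
# year-to-year swings in a school's ranking score. Scores are invented.
def moving_average_score(yearly_scores, window=3):
    """Average the most recent `window` years of scores; if fewer
    years are available, average whatever exists."""
    recent = yearly_scores[-window:]
    return sum(recent) / len(recent)

scores = [82.0, 88.0, 91.0, 85.0]  # oldest to newest
print(moving_average_score(scores))  # -> 88.0 (mean of 88, 91, 85)
```

A single unusually good or bad year shifts the published score by only a third of its raw effect, which is exactly the damping Washington Monthly also cites for its own three-year average.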

What do we like most about Forbes College Rankings? In short, Forbes recognizes that the value of the education a college provides its students should be based in large part on the success of its students. This is where Forbes College Rankings stand out from the pack. Forbes believes if a college’s students are successful, then the college is successful. If its students aren’t successful, then the college isn’t successful. Makes sense to us.

Washington Monthly
Each year Washington Monthly publishes its Washington Monthly College Guide and Rankings report. Unlike other college guides and rankings publications, such as Forbes and U.S. News & World Report, that compute rankings based on the value a college provides its students, Washington Monthly rankings are based on the contribution a college makes to the country and to society, specifically in the areas of research, service and social mobility. Washington Monthly rankings are organized into five general categories: National University Rankings, Liberal Arts College Rankings, Master’s University Rankings, Baccalaureate College Rankings and Best-Bang-for-the-Buck Rankings.

Washington Monthly weights its rankings equally across the three categories it measures–research, service and social mobility–because it considers each category equally important and meaningful. Annual rankings are also based on a moving average of the last three years of data for each measure, which reduces dramatic year-to-year changes that may not be indicative of long-term trends or value.

Each of the three categories Washington Monthly measures includes various components. The following is a breakdown of each category and its components.

Community Service

  • Size of the school’s Army, Air Force, and Navy ROTC programs relative to the size of the school
  • Number of alumni currently serving in the Peace Corps
  • Percentage of work-study grant money allocated to community service projects
  • Combined score based on number of full-time staff involved in community service relative to various factors including total number of staff, size of school and academic courses that incorporate service.
  • Whether or not the school provides scholarships for community service.

Research

  • Total amount of research spending
  • Number of PhDs awarded in fields of science and engineering
  • Number of faculty who received special awards and recognition relative to the total number of faculty employed
  • Number of faculty in national academies relative to the total number of faculty employed
  • Number of alumni who have gone on to complete a PhD in any field relative to the total number of alumni

Social Mobility

  • Percentage of Pell Grant recipients who graduate. (Washington Monthly has a long and complex system for calculating this measure.) This measure is believed to show how well a school helps poor students move up the economic and social ladder by graduating from college.

Washington Monthly has a separate ranking methodology to produce its annual Best-Bang-for-the-Buck Rankings. Unlike the other four ranking categories, the Best-Bang-for-the-Buck Rankings include all four-year colleges in a region, not just a smaller group of colleges that meet category specific performance criteria. The Best-Bang-for-the-Buck category is divided into Northeast, Midwest, Southeast, South and West ranking categories.

The five measures used in the computation of Best-Bang-for-the-Buck rankings include the following:

  • Student loan default rate
  • Graduation rate
  • Graduation rate performance. How does a college’s actual graduation rate compare to its predicted graduation rate?
  • Percent Pell Grant Students
  • Net price of attendance. This measure includes tuition, fees, room and board, and other expenses less grant aid received.
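
The graduation rate performance measure above is a simple actual-versus-predicted comparison. The sketch below, with invented rates, shows the idea:

```python
# Hypothetical sketch of graduation rate performance: how a school's
# actual graduation rate compares to its predicted rate. Rates invented.
def graduation_rate_performance(actual, predicted):
    """Positive means the school graduates more students than predicted
    for its student body; negative means it under-performs."""
    return actual - predicted

print(round(graduation_rate_performance(0.81, 0.74), 2))  # -> 0.07
```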

So how do we feel about Washington Monthly’s college rankings? We like them! Do we think the rankings provide a good relative measure of how good a college is for students? Not really. The rankings are informative, they provide insight and will appeal to certain groups of people–but we highly doubt they will sway students, employers or colleges in their decision making processes relating to college selection, hiring and resource allocation.

Wall Street Journal
WSJ is another publication that produces its own list of college rankings each year. WSJ produces rankings for a number of categories including Rankings by Major, Cream of the Crop, Top 25 Recruiter Picks, and Top Business Schools.

WSJ is unique among college ranking publications in that its methodology focuses almost exclusively on ranking colleges based on how well each school is able to help students land a job in key careers and professions.

WSJ works in partnership with Cambria Consulting to survey over 800 recruiters each year from the largest public and private companies, nonprofits and federal agencies in the nation across a myriad of industries. What does the survey produce? (1) A list of the colleges and universities that recruiters believe, based on their experience, produce the best-trained and educated bachelor-level graduates. (2) How many new graduates the recruiters hired the prior year from any colleges on the list. And (3) which school’s graduates were best trained/educated in each major listed on the survey–and from which they actively recruited.

While narrow in scope, and relatively simplistic, we really like the WSJ methodology for producing college rankings. At the end of the day, we go to college to (1) prepare for a career and (2) land a job when we graduate. If this isn’t accomplished, all else is for naught. The only major fault we find with WSJ rankings is that they are limited to the top 100 colleges and universities, which really limits their usefulness for supporting the research and decision making of aspiring students, higher education institutions and employers. We would like to see WSJ include more schools in its rankings.

Business Week
Every two years Business Week produces its list of the best business schools in the nation. It ranks approximately 112 full-time MBA programs using the following three measures:

  • Student satisfaction (45%) – is based on the survey results of over 10,000 full-time MBA students attending 138 MBA programs across the nation. The survey addresses students’ opinion of and satisfaction with career development, cultural aspects, and skill development.
  • Employer satisfaction (45%) – is based on the survey results of over 1,300 recruiters working at 614 companies. Recruiters are asked to assess the performance of schools’ graduates based on specific qualities including how effective they were on the job once they were hired.
  • Intellectual Capital (10%) – is based on the level of research expertise of schools’ faculty.

So how do Business Week college rankings stack up to the competition? Really well, in our opinion. Obviously, Business Week rankings only focus on colleges offering graduate-level business programs–but that’s okay. We think the value indicators used, and the respective weights they’re assigned to calculate rankings, provide a fairly accurate measure of how well an MBA program is going to help students prepare for a career, land a good job and become successful.

Kiplinger
Kiplinger produces what we consider a sound college ranking report based on pragmatic criteria such as affordability and academic quality.

Kiplinger starts with a list of 500 public four-year colleges and universities provided by Peterson’s/Nelnet. Using various academic quality measures–including admission and retention rates, graduation rates, SAT and ACT scores, and student-faculty ratios–Kiplinger narrows its list to about 120 schools. It then ranks each school based on cost and financial aid.

With respect to ranking calculations, the academic quality measure carries almost two-thirds of the weight, with cost and financial aid accounting for a third of the weight.

We don’t think the college rankings produced by Kiplinger are necessarily the best, but they’re certainly not the worst when it comes to assessing the relative value of colleges based on common quality metrics.


It is our opinion that while attending a highly ranked college may increase the potential for a higher starting salary and generate more job opportunities, in many cases students who attend schools not found among the top ranked colleges on major college ranking indices may actually receive a better education and graduate more prepared to enter the job market and launch a successful career. It is also our opinion that college rankings, including those found on our website and others, should not be construed as a final indicator of the value of a college or the programs it offers. Some less highly ranked colleges offer some of the best programs in the nation. The complexities of higher education, including both the tangible and intangible benefits, cannot be reduced to just one number or ranking.
