Published: June 11, 2015

The business of rankings: did the US News & World Report make substantial mistakes?


Here's an easy task: choose between the following two schools for your child.

School #1 provides high-quality instruction and strong supports in order to academically accelerate students and challenge them to take difficult courses.

School #2 provides high-quality instruction but doesn't push students to do more than what is normally expected.

If you're an actual parent, you probably prefer school #1.

But if you're US News & World Report, selecting the "Best High Schools" in America, you apparently prefer school #2.

This outcome offers a cautionary tale about why it's never a good idea to design a formula and start plugging in numbers before first understanding the system you're analyzing. A seemingly small mistake can lead to big problems.

Those problems arose in New York, where three very different math tests were treated by US News as if they were equal, which resulted in its rankings penalizing schools that encouraged students to take on challenges.

Let me explain why I started researching this issue.

As a policy researcher, I focus on practices that can close opportunity gaps. In this context, I have studied the long-term effects of such practices on the achievement of students at a diverse suburban school, South Side High School on Long Island. It is one of the excellent schools that lost its high ("Gold") ranking from US News because of the publication's altered approach.

And why did that happen?

Here is how the rating process works

US News begins its rating process by calculating proficiency rates for reading and math on what it describes as "exit exams" or "high school proficiency exams" (both terms are used).

In New York, these exit exams or proficiency exams are (or at least should be) the state's required Regents exams.

But the publication then violated its own guidelines about which tests should be counted and decided to also include scores on optional, more difficult tests.

New York students can, in addition to taking the requisite Algebra exam, take one or both of two elective Math Regents exams, in Geometry and Algebra 2/Trigonometry. All three of these standardized math exams are part of the state's assessment system. The optional tests are taken by fewer students because they serve a different purpose.

In the year that supplied the data for the current US News rankings, approximately 290,000 students took the Algebra exam, 162,000 took Geometry and 116,000 took Algebra 2/Trigonometry.

As the decreasing numbers suggest, these elective end-of-course tests are not exit exams or proficiency exams. They are substantially more difficult and are taken mainly by a self-selected subgroup of students attempting to achieve an advanced diploma. Yet these three math exams are all treated equally in the US News analyses.

Rankings give misleading information

To illustrate this approach and its inherent problem, imagine an evaluation of schools based on students' high-jumping ability. Schools are ranked higher if more students clear a high-jump bar set at a minimum of four feet, a standard generally met by 80% of students in this hypothetical state. But if the bar is set at five feet, only 50% succeed; if it's set at six feet, only 25% succeed.

Now imagine two schools:

School #1 pushes as many students as possible to not just clear the four-foot bar, but also to attempt jumps over bars set at five feet and six feet.

School #2 pushes only its best athletes to attempt these more challenging jumps.

Finally, imagine a ranking system that counts all jump attempts, regardless of where the bar is set. School #2 looks good in the rankings, with 80% success, but School #1 sees its ranking drop, due to all those unsuccessful attempts to clear higher bars, even though 80% of its students also clear the four-foot bar.

When US News counted results on all three math tests taken by a high school's students, it penalized schools striving to achieve.

Two schools may both have a 95% pass rate on the Integrated Algebra test, but if School #1 convinces many of its students to take the more difficult courses and exams as well, then its overall rates will surely fall, and US News will consequently and misleadingly advise its readers that School #1 is worse.

School rankings tell only part of the story.
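
To make this penalty concrete, here is a minimal sketch in Python of the equal-weighting calculation described above, applied to two hypothetical schools. The 95% Integrated Algebra pass rate mirrors the example just given; every other count below is invented purely for illustration and does not come from actual school data.

```python
# Hypothetical counts, invented for illustration: both schools pass 95% of
# their 200 Integrated Algebra test-takers, but School #1 sends far more
# students on to the optional, harder Regents exams.
school_1 = {
    "Integrated Algebra": (200, 190),  # (students tested, students passing)
    "Geometry": (150, 90),
    "Algebra 2/Trigonometry": (100, 45),
}
school_2 = {
    "Integrated Algebra": (200, 190),
    "Geometry": (40, 34),
    "Algebra 2/Trigonometry": (20, 17),
}

def equal_weight_rate(results):
    """Pool every result as if the three exams were interchangeable."""
    tested = sum(t for t, _ in results.values())
    passed = sum(p for _, p in results.values())
    return passed / tested

for name, school in [("School #1", school_1), ("School #2", school_2)]:
    print(f"{name}: {equal_weight_rate(school):.0%} pooled proficiency rate")

# School #1: (190 + 90 + 45) / (200 + 150 + 100) = 325/450, roughly 72%
# School #2: (190 + 34 + 17) / (200 + 40 + 20)   = 241/260, roughly 93%
```

Under these assumed numbers, the school that pushes broad participation in the optional exams posts the lower pooled rate, even though the two schools perform identically on the required test.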

A scan of the New York results does, in fact, yield examples of schools designated by US News as "Gold" that, by key measurable criteria, have lesser results than schools with similar proportions of economically disadvantaged students, but that are not so designated.

These left-out schools have higher college readiness rates and higher rates of students awarded a Regents Diploma "with advanced designation" (meaning they passed the most difficult math test, Algebra 2/Trigonometry).

This lapse is significant given the influence of the US News rankings. Powerful constituencies in school communities pay attention, and district leaders hear from them.

The rankings are not low stakes; they therefore put in place a series of incentives and disincentives.

With strong enough incentives, people do nonsensical things

In sports, NBA teams will sometimes lose games deliberately in order to better their lottery chances, and college teams will respond to incentives to schedule weak opponents to improve their win-loss records.

Politicians will cater to the interests of donors, even if those interests differ greatly from the preferences of the average voter.

Schools will set aside creative, meaningful lessons in favor of test preparation, in order to avoid "No Child Left Behind" sanctions.

Colleges and universities will create early decision admissions systems in order to increase their rating on the US News Best Colleges list. One measure used in the rankings is how many admitted students choose another college. Early decision admissions simply don't give admitted students the choice to go elsewhere.

According to Paul Glastris and Jane Sweetland, authors of a recent college guide:

"By elevating selectivity, US News creates incentives for schools to game the system by raising admissions standards and accepting fewer students who are less prepared or from lower-income backgrounds - that is, the ones most likely to need extra help graduating."

How I got involved

Coming back to the issue with the rankings in the case of New York's South Side High School: in May of this year, the school's principal, Carol Burris, coauthored a piece that raised some of the problems with the rankings.

Burris had, back in 2013, convinced me to join her in a project to create an alternative way to recognize great high schools. She explained that while existing lists do identify many high-quality schools, the approaches underlying these lists generally fail to reward excellent schools that do not enroll large numbers of high-scoring students.

South Side High School had long been rated highly by US News, including during the time when I conducted research there, and Burris was certainly happy that her school was highly ranked. Moreover, I can attest, based on my research there, that the school is deserving of the honors it has received.

A long and consistent body of educational research stands for the basic tenet that great schools are those that improve student learning. No reputable research contends that schools become high-quality by enrolling the strongest students or by declining to encourage students to take the most challenging classes and exams.

Accordingly, we designed a project that we called Schools of Opportunity, which recognizes schools for using research-based best practices and for challenging and supporting their entire student body, including those without rich opportunities to learn outside of school.

It is built on criteria set forth in the 2013 book Closing the Opportunity Gap, which I co-edited.

We piloted it this past school year in Colorado and New York, recognizing a first group of schools for their excellence. We plan to scale the project up next year, to recognize schools across the US.

Meanwhile, the ranking approach used by US News has led parents to wonder what happened to their communities' formerly excellent schools. Seeking answers, the principals who coauthored the piece contacted US News, but they were rebuffed.

Also, when Valerie Strauss of the Washington Post asked US News to respond to the concerns raised about the high school rankings, the publication replied with the following statement:

"This test was used for all schools in New York - it was applied equally across the state. Because [our calculations] are relative, schools are being measured against each other. ... We applied the data in the way that we outlined in our methodology and technical appendix and these rankings reflect that methodology. We are very clear in our methodology and technical appendix about what assessment data we used and how it was applied."

Compare this explanation with the argument offered in favor of standardized testing just a week earlier in the satirical Onion newspaper: "Every student [is] measured against [the] same narrow, irrelevant set of standards."

An unreasonable approach remains unreasonable even if it is applied equally to all schools and is openly explained in a methods section.

Kevin Welner is Professor of Educational Foundations, Policy & Practice at the University of Colorado Boulder.


This article was originally published on The Conversation.

Related Faculty: Kevin Welner