As a lot of regulars on this site know by now, since 2011 I've been trying to come up with a formula that can properly compile NFL statistics and return a value describing how good a team actually is. The "catch" is that I ignore wins, because the underlying purpose of this now-three-season-long exercise is to weed out pretenders as soon as possible while uncovering contenders that may have gotten off to a slow start. I went public with the rankings in 2012 (which led to my 'promotion' to contributor for this site) and continued to fine-tune my process in 2013.
Of course, all of this becomes pretty pointless if I don't actually review the outcome of my system. In this post, I'll take a fairly light look at the results to see if they pass the "eye test." Part II will be much more technical from a statistics standpoint, and Parts III and IV will build off the observations made from those statistics. For starters, let's see how my rankings shaped up heading into the 2013 postseason.
Isn't This Where...
| Seed | AFC | NFC |
| --- | --- | --- |
| 1 | ***Denver (2)*** | ***Seattle (1)*** |
| 2 | ***New England (7)*** | ***Carolina (5)*** |
| 3 | ***Cincinnati (8)*** | ***Philadelphia (10)*** |
| 4 | ***Indianapolis (9)*** | Detroit (14) |
| 5 | ***Kansas City (3)*** | ***San Francisco (4)*** |
| 6 | ***San Diego (12)*** | ***New Orleans (6)*** |
This is the playoff picture predicted by my rankings after every team had played its full regular season schedule. The teams in bold italics held the same seed in the real-life playoff picture. The number in parentheses is each team's overall rank in my system, with the highest-ranked team in each division getting seeded one through four in its conference and the remaining highest-ranked teams getting the wild cards.
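That seeding interpretation is mechanical enough to write down. Below is a minimal Python sketch of it; the function and variable names are my own, and the division assignments are the teams' real-life divisions, which the chart itself doesn't spell out.

```python
def seed_conference(teams):
    """Seed one conference from (name, division, overall_rank) tuples.

    Lower rank is better. The best-ranked team in each division takes
    seeds 1-4 (ordered by rank); the two best-ranked remaining teams
    take wild-card seeds 5-6. Returns a list of (seed, name) pairs.
    """
    # The best-ranked team in each division wins it.
    winners = {}
    for name, div, rank in teams:
        if div not in winners or rank < winners[div][2]:
            winners[div] = (name, div, rank)
    division_winners = sorted(winners.values(), key=lambda t: t[2])

    # Everyone else competes for the two wild cards.
    winner_names = {t[0] for t in division_winners}
    rest = sorted((t for t in teams if t[0] not in winner_names),
                  key=lambda t: t[2])
    wild_cards = rest[:2]

    return [(i + 1, t[0]) for i, t in enumerate(division_winners + wild_cards)]


# The six AFC teams from the chart, with their real divisions.
afc = [("Denver", "West", 2), ("New England", "East", 7),
       ("Cincinnati", "North", 8), ("Indianapolis", "South", 9),
       ("Kansas City", "West", 3), ("San Diego", "West", 12)]
```

Running `seed_conference(afc)` reproduces the AFC column of the chart: Denver wins the West despite Kansas City's higher overall rank, and Kansas City and San Diego fall to the wild-card seeds.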
In this instance, my chosen interpretation of the rankings (the same one I've used since I started this) matched reality almost perfectly, with Detroit being the only exception. There is, of course, a lot of bias in that statement, since how the teams were arranged in that chart depended entirely on how I saw fit to arrange them; still, I maintain that this interpretation makes the most logical sense, because it mirrors how the playoff teams are actually seeded (for now, at least).
One way of looking at the chart is to see how the teams did when they played each other in the postseason. According to Football Locks, there were four upsets in the playoffs: New Orleans over Philadelphia, Indianapolis over Kansas City, San Diego over Cincinnati, and Seattle over Denver. My chart above yielded only two upsets, where an "upset" is defined as a lower-ranked team defeating a higher-ranked team: #9 Indianapolis beat #3 Kansas City, and #12 San Diego defeated #8 Cincinnati. Other than that, every higher-ranked team beat its opponent.
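For the curious, that recount can be done mechanically. The snippet below (a quick sketch, using the ranks from the chart and the four games Football Locks called upsets) flags a game as an upset when the winner sits below the loser in my rankings:

```python
# Overall ranks from the chart above (lower number = better).
rank = {"Seattle": 1, "Denver": 2, "Kansas City": 3, "New Orleans": 6,
        "Cincinnati": 8, "Indianapolis": 9, "Philadelphia": 10,
        "San Diego": 12}

# (winner, loser) for the four games Football Locks called upsets.
games = [("New Orleans", "Philadelphia"), ("Indianapolis", "Kansas City"),
         ("San Diego", "Cincinnati"), ("Seattle", "Denver")]

# An "upset" against my chart: the winner was ranked below the loser.
upsets = [(w, l) for w, l in games if rank[w] > rank[l]]
```

Only the Indianapolis and San Diego wins survive as upsets; New Orleans and Seattle were the higher-ranked teams in my chart all along.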
Admittedly, the extremely small sample size here leaves a ton of room for error, and I will be tracking this as the years go on to see how things shape up. I will say that I am pleased with how things started off, as there appears to be a good correlation between the rankings and the teams' playoff performances. But that is only one side of this. Was my system any good for making predictions in the regular season?
...We Came In?
One of my stated goals for this exercise is to gauge a team's ability to compete as early in the season as possible - in other words, to employ a systematic approach for separating the "pretenders" from the "contenders."
To see if the rankings actually accomplished this, I turned back the clock to see where the eventual playoff teams were at Week 4, which was the first week I began tabulating with the system. At that point in time, they looked like this:
The results are somewhat mixed, and there is more than one way to look at them. Going by each team's record at the time, the rankings almost seem pointless: most of these teams had winning records anyway, and the ones that didn't weren't predicted to make the playoffs either. The one exception is Carolina, which had a losing record but sat in my top ten. If you recall, there was a lot of buzz at that point in the season about Ron Rivera being on the hot seat, but the Panthers stormed back, a turnaround my system predicted.

So the rankings do seem to show when a team is doing things right, even if it has run into some bad luck or hard times, but they can't show when a team is going to turn things around (for example, the 49ers jumped eleven spots in Week 5 once their long winning streak began). And to be fair, no amount of statistical analysis can predict when a team will "flip the switch," if that even happens at all.
Another way to interpret this is to look for "relative" predictions - in other words, predicting division winners. This is something my system has done reasonably well across seasons: by Week 4 of 2011, Jim Harbaugh's first season, it predicted that San Francisco would win the NFC West. It has also consistently picked winners in hotly contested divisions, like the AFC North (back when the Steelers-Ravens rivalry was relevant). It's no guarantee, of course, but it has been a noticeable trend. The italicized teams above were correctly predicted to win their divisions (Denver gets an asterisk because, for a good portion of the season, the Chiefs were the predicted division winner).
The Verdict (Part I)
Overall, the system appears to be doing a decent job, but it's still rough around the edges. For a ranking system that tries to predict success without factoring in the most basic indicator of success (wins), it achieves its goal reasonably well on multiple levels.
There's always room for improvement, of course. That is what I'll examine in Part II, where I analyze the statistics used to create the formula and comment on which ones are useful and which aren't. So stay tuned! The next part will go beyond the scope of these rankings and ultimately become a study of the sport of football as a whole.