After 13 weeks, CFL rankings are still messed up

Hamilton has the best record of the rest at 6-6. They have been the most consistent from week to week, and over the last 5 games they have the best record, tied with BC at 3-2. BC's two losses wasted good offensive efforts with weak defensive efforts (a 48-35 loss to Calgary and a 35-31 loss to Hamilton). Head-to-head, Hamilton beat BC two weeks ago. Beyond that, Hamilton has looked like a football team that can win some games, while BC has looked like a train wreck. The bottom 5 aren't great, but I think Hamilton is easily the best of that group.

The rest, you ask? Over the same 5 games: Toronto 1-4, Winnipeg 1-4, Edmonton 2-3.

So I assume if the Riders beat TO tomorrow they will fall to 8th?

Say what?

Unless they figure out a way of putting the "future Ottawa" team in, in which case the Riders will be 9th.

One of the most oddball things about CFL.ca's rankings is how high Winnipeg has been, despite losing so many games. But in the last week or so, three different media outlets, the National Post, the Globe and Mail, and Sun Media/QMI (SLAM! Sports), have all run stories calling Winnipeg something along the lines of the "best bad team in CFL history", and none of them made any reference to CFL.ca's rankings.

The other glaring oddity is that Saskatchewan hasn't been in the top 4 since week 4, despite being in the top 3 in terms of wins. So maybe next week, after Sask wins and thus drops in the power rankings again, we'll see some articles about Saskatchewan being the "worst good team in CFL history".

This... it was a good idea in theory, but executed terribly.

The purpose of a power ranking is to act as a predictor for future games, so arguing against it based on the standings completely defeats the purpose of making one in the first place. It makes sense that the teams that do the things that supposedly contribute most to winning would be expected to win more often. However, using only one year of data doesn't tell you anything.

Winnipeg and Edmonton are certainly doing the things needed to win. :lol: :lol: Mark Saturday Oct 30th on your calendar. Grey Cup preview. :lol:

Originally I thought this was a good idea too, in particular because I actually teach some of the techniques that were used in developing the system, and they make a lot of sense (I'm a math guy, though, not a stats guy, so I'm not certain about some of the steps). But yeah, after a while it becomes apparent that there's something wrong with the overall system. There are a few reasons why I think it's not giving us what we'd expect.

(1) There's too much variability in how the game itself is played. That is, two teams could put up roughly the same stats (i.e. QB rating, rushing yards, sacks, and missed field goals) but still put up widely different numbers of points. It's just like how I can have a class average of 85% but still have a lot of people who get below 65%. One team could play poorly but do well, while another could play well and do poorly. In other words, the formula doesn't account for the "ugly win" or the "pretty loss". I felt this way about the 2nd Winnipeg @ Hamilton game. It looked to me like Winnipeg was actually playing better than Hamilton, but we ended up beating them (it was a very close game; Winnipeg nearly tied it up in the last seconds).
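To illustrate what I mean, here's a rough Python sketch. The weights and stat lines are completely made up (CFL.ca hasn't published theirs, as far as I know), but the shape of the problem is the same: identical inputs give one prediction, while the actual scores can be all over the place.

[code]
# Made-up weights and stat lines, just to show the shape of the problem.
team_a = {"qb_rating": 95.0, "rush_yds": 110, "sacks": 3, "missed_fg": 1}
team_b = dict(team_a)  # identical stat line

def predicted_points(stats):
    """Toy linear model in the spirit of the ranking formula (invented weights)."""
    return (0.20 * stats["qb_rating"]
            + 0.08 * stats["rush_yds"]
            + 1.5 * stats["sacks"]
            - 2.0 * stats["missed_fg"])

print(predicted_points(team_a), predicted_points(team_b))  # same prediction for both

# ...yet in the actual games one team might score 35 and the other 17.
# The formula can only give the expected points for a stat line; the spread
# around that expectation (the "ugly win" / "pretty loss") is invisible to it.
actual_a, actual_b = 35, 17
print(actual_a - predicted_points(team_a), actual_b - predicted_points(team_b))
[/code]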

(2) It doesn't take into account who teams are playing against. Team A and Team B can have the same stats (QB rating, etc.), but if Team A's opponent is better than Team B's opponent, then the reasonable conclusion is that Team A is more "powerful" than Team B. The formula treats them equally. I think this could explain why Saskatchewan ended up going down while Edmonton went up.
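For anyone wondering what an opponent adjustment could even look like, here's a quick sketch of a generic strength-of-schedule bump. This is not what CFL.ca does; it's just the simplest version of the idea, with made-up scores.

[code]
from collections import defaultdict

# Made-up results: (home, away, home_pts, away_pts).
games = [
    ("HAM", "BC",  35, 31),
    ("CGY", "BC",  48, 35),
    ("HAM", "TOR", 28, 10),
    ("BC",  "TOR", 24, 20),
]

def adjusted_ratings(games):
    """Average point margin plus a simple strength-of-schedule bump:
    the average margin of the opponents each team has faced."""
    margins, opponents = defaultdict(list), defaultdict(list)
    for home, away, hp, ap in games:
        margins[home].append(hp - ap)
        margins[away].append(ap - hp)
        opponents[home].append(away)
        opponents[away].append(home)

    avg_margin = {t: sum(m) / len(m) for t, m in margins.items()}
    return {
        t: avg_margin[t]
           + sum(avg_margin[o] for o in opponents[t]) / len(opponents[t])
        for t in avg_margin
    }

print(sorted(adjusted_ratings(games).items(), key=lambda kv: -kv[1]))
[/code]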

(3) The formula only predicts the expected number of points. But there's more to a team's strength than its ability to rack up points (see (2)). If scoring were all that mattered, then instead of using a formula we could just take the average number of points each team has scored up to the latest week and rank the teams that way, and I've never seen any other power ranking do that.
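To be clear about what I mean by "rank the teams that way", it would literally just be this (placeholder teams and numbers):

[code]
# "Power" reduced to nothing but points per game.
points_scored = {"MTL": [30, 27, 41], "SSK": [28, 35, 24], "WPG": [17, 20, 31]}
ranking = sorted(points_scored,
                 key=lambda t: sum(points_scored[t]) / len(points_scored[t]),
                 reverse=True)
print(ranking)
[/code]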

(4) I'm not sure why they're using a weighting of 25% (last game's stats) + 75% (average of previous games) for their input. Is there a rationale for that, other than that 25 and 75 are nice-looking numbers? (In teaching math, I've learned that there are nice numbers and ugly numbers. I can tell how nice a number is by the number of students who come to me with the right answer but think it's wrong.) I'm also not sure why they chose only 4 stats. Did they sit down beforehand and say, "Let's see what the four most significant stats are," or did they ask, "What is the fewest number of stats we need in order to predict the number of points scored with a given accuracy?" If it was a conclusion of the process that 4 stats are enough, then I'm okay with it. If 4 was an arbitrary choice made beforehand, then I'm not. But it's not clear to me from any of the articles they've written on the matter.
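Here's how I read the 25/75 blend from the articles (if I've misread it, treat this as my assumption). The second function is a recursive variant for comparison; it behaves like exponential smoothing and weights recent games progressively more, which might or might not be closer to what they intended.

[code]
def blended_input(game_values):
    """My reading of the blend: 0.25 * last game + 0.75 * average of all prior games.
    game_values is one team's per-game values of a single stat, oldest first."""
    *previous, last = game_values
    if not previous:
        return last
    return 0.25 * last + 0.75 * (sum(previous) / len(previous))

def smoothed_input(game_values, alpha=0.25):
    """Recursive variant: exponential smoothing, so recent games count progressively more."""
    value = game_values[0]
    for v in game_values[1:]:
        value = alpha * v + (1 - alpha) * value
    return value

history = [88.0, 74.0, 102.0, 95.0, 61.0]  # hypothetical QB-rating history
print(blended_input(history))   # 0.25*61 + 0.75*89.75 = 82.5625
print(smoothed_input(history))  # ~83.05
[/code]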

(5) The QB/passer rating. I don't think this should be used in the formula at all, since it's already a combination of 4 different stats. Those 4 stats should be evaluated individually. Perhaps, for example, yards per attempt (one of the stats in the passer rating formula) has a greater or lesser effect on the outcome of the game than the power ranking formula would suggest, but we don't know, because the yards-per-attempt stat is bound up inside the passer rating.
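For reference, here's the passer rating calculation as I know it. I believe the CFL uses the same NFL-style formula, but treat the exact constants as my assumption. The point is that four stats get clamped, rescaled, and averaged into one number, so two quite different stat lines can land on comparable ratings, and the ranking formula can't separate their individual effects back out.

[code]
def passer_rating(comp, att, yards, td, interceptions):
    """NFL-style passer rating (which I believe the CFL also uses)."""
    def clamp(x):
        return max(0.0, min(x, 2.375))
    a = clamp((comp / att - 0.3) * 5)               # completion percentage
    b = clamp((yards / att - 3) * 0.25)             # yards per attempt
    c = clamp((td / att) * 20)                      # touchdown rate
    d = clamp(2.375 - (interceptions / att) * 25)   # interception rate
    return (a + b + c + d) / 6 * 100

# Two quite different stat lines, comparable ratings (~101 vs ~95):
print(passer_rating(comp=22, att=30, yards=210, td=2, interceptions=1))
print(passer_rating(comp=15, att=30, yards=290, td=1, interceptions=0))
[/code]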

So I'm gonna go out on a limb and say that the problem might not be the formula itself, but rather the way in which the formula is being used, or more specifically, the fact that the formula is the only thing that's being used. In other words, the formula might be okay at estimating points scored, but there's more to power than just points.

I'm also not sure how much using data from more seasons would change the formula. One season gives 144 data points, which is apparently quite a lot according to one of my stats-expert friends (although what constitutes "enough" data points depends on the desired accuracy, among other things). I thought I read somewhere recently that more points per game have been scored on average in recent seasons than in previous ones. If that's actually true, then using too much data from too far back could result in a formula that underestimates the scores more than it overestimates them (ideally, short of having a formula that gives you the exact number of points, which is impossible, you'd want a formula that underestimates as much as it overestimates). If the data were available in spreadsheet form (or some other easily harvestable electronic form), it wouldn't take long to check how much the formula would change if more seasons were considered (assuming we stick with the 4 chosen stats).
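If that data ever does turn up in spreadsheet form, the check itself is short. Here's a sketch of what I'd run; the file names and column layout are hypothetical, so whatever format the data actually comes in would need its own loader. The interesting number is the average error on the most recent season when the fit only used older seasons.

[code]
import csv
import numpy as np

def fit_points_model(rows):
    """Least-squares fit of points scored on the 4 chosen stats (plus an intercept).
    rows: dicts with qb_rating, rush_yds, sacks, missed_fg, points (one per team-game)."""
    X = np.array([[1.0, r["qb_rating"], r["rush_yds"], r["sacks"], r["missed_fg"]]
                  for r in rows])
    y = np.array([r["points"] for r in rows])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def mean_error(coeffs, rows):
    """Average of (predicted - actual); negative means the model underestimates."""
    X = np.array([[1.0, r["qb_rating"], r["rush_yds"], r["sacks"], r["missed_fg"]]
                  for r in rows])
    y = np.array([r["points"] for r in rows])
    return float((X @ coeffs - y).mean())

def load_season(path):
    with open(path, newline="") as f:
        return [{k: float(v) for k, v in row.items()} for row in csv.DictReader(f)]

# Hypothetical file names, just to show the comparison:
# current = load_season("cfl_2010.csv")                  # 144 team-games
# older   = load_season("cfl_2009.csv") + load_season("cfl_2008.csv")
# print(fit_points_model(current))
# print(mean_error(fit_points_model(older), current))    # negative = old fit underestimates today's scoring
[/code]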