Week One results can be weighted too heavily. Football science, statistics, the point spread, and NFL history can help make sense of gambling markets and early-season trends.


The first week of the pro football season is often chaotic. While the NFL is a league of parity, the opening games bring plenty of uncertainty. Players and coaches have changed teams, and rookies are playing in their first NFL contests. Some players have improved through off-season work, while others have begun to feel the effects of age. In the standings, the first week is just another week, but it potentially carries significant information about what transpired during the offseason. Put another way: how panicked should fans of the Eagles (10.5-point favorites as of this writing) be if the team loses to the Jaguars?

One way to examine the significance of a potential “Week One” effect is to review how teams fared in their openers and compare those results to expectations, as captured in their Team Totals. A Team Total is a season-long wager on whether a team will win more or fewer games than a posted number. For instance, the Patriots have a Team Total of 10.5, meaning you can bet on them to win more than 10.5 games or fewer than 10.5. (Note: this doesn’t account for vig, the sportsbook’s built-in advantage on each wager, but we’ll ignore that for now.) Drawing on a dataset of NFL closing Team Totals as of the first week of each season from 1999 through 2013, we can examine how much Week 1 results can tell us about a team.
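To make the mechanics concrete, here is a minimal sketch of how such a bet settles. The function and sample numbers are illustrative, not anything from the piece, and vig is ignored as above:

```python
# Grade a hypothetical over/under bet on a season-long Team Total.
def settle_team_total(actual_wins: int, team_total: float, side: str) -> str:
    if actual_wins == team_total:
        return "push"  # only possible on whole-number totals
    went_over = actual_wins > team_total
    return "win" if (side == "over") == went_over else "lose"

print(settle_team_total(12, 10.5, "over"))   # win: 12 > 10.5
print(settle_team_total(9, 10.5, "under"))   # win: 9 < 10.5
```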

First, some background on the data we’re working with. We’re tracking 469 of a possible 474 team-seasons over those 15 years. Five teams were omitted because there was no closing Team Total on them, such as the Colts’. Removing them bumps the average Team Total to 8.1 (from 8), so we’ll recalibrate our results to adjust for the fact that all the Totals are shaded a bit high. Team Totals also tend to be tightly bunched, with a standard deviation of 1.7 wins versus a standard deviation of 3.1 wins for actual win totals.
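That recalibration might look like the following sketch, assuming the team-seasons live in a table; the file and column names here are hypothetical:

```python
import pandas as pd

# Hypothetical file: one row per team-season, 1999-2013.
df = pd.read_csv("team_totals_1999_2013.csv")

# Totals average 8.1 against roughly 8 actual wins, so shift them down
# by the observed gap before comparing them to results.
shade = df["team_total"].mean() - df["actual_wins"].mean()
df["adj_total"] = df["team_total"] - shade

print(df["team_total"].std())   # ~1.7 wins: totals are tightly bunched
print(df["actual_wins"].std())  # ~3.1 wins: actual records vary far more
```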

So, how much did an opening-week result move teams relative to their Team Total by the end of the season? We would expect our Week 1 winners to be a bit better than average, so their expectation before the game should have been to win a bit more than half the time, and getting the full win should bump their expectation by a bit less than half a win. (A reminder: results do not add up evenly because of the excluded teams with no Team Total.) What we see largely lines up with this:

 

|         | Team Total | Actual Wins | Difference |
|---------|------------|-------------|------------|
| Winners | 8.275      | 8.862       | 0.587      |
| Losers  | 7.728      | 7.149       | -0.579     |

 

Week 1 winners were predicted to be only slightly better than average. After getting an opening win, they went on to beat expectations by 0.587 wins. That’s a bit better than what we’d anticipate: the Week 1 winners were expected to win 8.275 games before the season, which works out to 0.517 wins per week. They actually won one game that week, so they beat expectations in Week 1 by 0.483 wins. The difference between the expected 0.483 wins and the observed 0.587 wins could be the size of the “informational” value contained in a Week 1 win. The effect is modest, however; beating your expectation by about 0.1 wins just isn’t much to get excited over.
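Here is the same arithmetic as a sketch, using the table values above; nothing here goes beyond what the paragraph already computes:

```python
# Week 1 winners, from the table above.
team_total = 8.275                           # preseason expectation
actual_wins = 8.862                          # actual season wins

expected_week1 = team_total / 16             # 0.517 expected Week 1 wins
week1_surprise = 1.0 - expected_week1        # 0.483: the win itself
season_surprise = actual_wins - team_total   # 0.587: full-season beat

info = season_surprise - week1_surprise      # ~0.104 "informational" wins
print(round(info, 3))
```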

What about the big favorites who were upset victims, or the big underdogs who pulled off upsets themselves? Surely there is information to be gleaned from the big “surprise” results. Using point spread data from the excellent resource, we can try to tease out a bigger effect.

Historically, teams favored by 5.5 points or more win approximately two-thirds of the time or more (this will be explored further in a future piece). We have 88 such favorites and 87 such underdogs over the 15-year period. Overall, the data look like this:

 

|               | Team Total | Actual Wins | Difference |
|---------------|------------|-------------|------------|
| 5.5+ Pt Faves | 9.398      | 9.445       | 0.046      |
| 5.5+ Pt Dogs  | 6.843      | 6.403       | -0.440     |

 

Our 5.5-point favorites are by and large pretty good teams, and most of our 5.5-point dogs are not very good (I suspect the underperformance of the dogs relative to their Team Total is noise, but maybe not). So what did it mean when these pretty good teams lost in Week 1?

 

| 5.5+ Pt Spread | Average Spread | Team Total | Actual Wins | Difference | Info   |
|----------------|----------------|------------|-------------|------------|--------|
| Fave & Win     | -7.908         | 9.463      | 9.833       | 0.370      | 0.100  |
| Fave & Loss    | -7.696         | 9.260      | 8.613       | -0.647     | 0.083  |
| Dog & Win      | 7.648          | 6.943      | 7.603       | 0.661      | 0.069  |
| Dog & Loss     | 7.908          | 6.798      | 5.863       | -0.935     | -0.205 |

 

I’ve added two extra columns here. The first is the average spread in each bucket, because 5.5+ point favorites come in many sizes. As you can see, the 5.5+ point favorites who won tended to be slightly bigger favorites (by about 0.2 to 0.25 points on average) than the 5.5+ point favorites who lost, and vice versa for the underdogs.
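As an aside, one common way to translate spreads like these into win probabilities is a normal approximation over the NFL’s historical margin distribution. This is an assumption of mine, not anything the article uses:

```python
from math import erf, sqrt

# NFL game margins scatter around the spread with a standard deviation
# of roughly 13.5 points (a stylized fact, not the article's number).
def spread_to_win_prob(spread: float, sd: float = 13.5) -> float:
    """P(win) for a team laying `spread` points (negative = favorite)."""
    return 0.5 * (1 + erf(-spread / (sd * sqrt(2))))

print(round(spread_to_win_prob(-5.5), 2))  # ~0.66, the two-thirds cited above
```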

The second added column, titled “Info”, attempts to measure the informational value contained in a win or loss. For instance, our bucket of 5.5+ point favorites who won were expected to get approximately 0.73 wins heading into the game (based on the historical winning percentage of 5.5+ point favorites since 1978). They actually won one game, so they beat expectations by about 0.27 wins. With no informational value, we’d expect their season win total to rise by that same 0.27 wins. Yet they went on to beat their predicted Team Total by 0.370 wins, about 0.1 wins better, a similar-sized effect to what we saw before.
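Written out as a sketch, that “Info” calculation might look like this. The 0.73 win probability comes from the article, but the function and sign convention are my reading of the table (the underdog rows may follow a different convention):

```python
def info_value(team_total: float, actual_wins: float,
               won_week1: bool, p_win: float) -> float:
    """Season-long surprise minus the mechanical Week 1 surprise."""
    week1_surprise = (1.0 if won_week1 else 0.0) - p_win
    return (actual_wins - team_total) - week1_surprise

# 5.5+ point favorites who won: a 0.370 beat minus a 0.27 Week 1 beat.
print(round(info_value(9.463, 9.833, True, 0.73), 3))   # 0.100
# 5.5+ point favorites who lost: -0.647 against an expected -0.73.
print(round(info_value(9.260, 8.613, False, 0.73), 3))  # 0.083
```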

There are a few takeaways here. First, we get some pretty decent stratification: winning versus losing your first game was tied to a 1.2-win difference in actual season wins for favorites and a 1.8-win difference for underdogs (although this is largely the result of superior or inferior team quality in the first place). Second, favorites who lost should have expected to lose 0.73 wins off their Team Total, but they actually lost only 0.647. In other words, while they didn’t get a win in Week 1, they went on to perform just fine, and actually a bit better than we’d expect if the loss carried no information. Finally, the underdogs that lost underperformed by nearly a full win, and the informational value contained in that loss was double the size of any other effect we’ve seen.

What if we increase our spreads?

 

| 7.5+ Pt Spread | Average Spread | Team Total | Actual Wins | Difference | Info   |
|----------------|----------------|------------|-------------|------------|--------|
| Fave & Win     | -10.200        | 9.860      | 9.926       | 0.066      | -0.184 |
| Fave & Loss    | -9.955         | 9.572      | 8.335       | -1.237     | 0.487  |
| Dog & Win      | 10.050         | 6.002      | 6.976       | 0.973      | -0.223 |
| Dog & Loss     | 10.200         | 6.239      | 5.461       | -0.777     | -0.027 |

First, a major caveat: this represents only 71 teams and just 21 upsets. There simply are not many Week 1 games over the last 15 years in which a team favored by more than a touchdown lost, so sample size is a real concern.

However, what we see here is that big favorites who win go on to outperform their preseason expectation only modestly. Meanwhile, big favorites who get upset may be in for some real worries, dropping from 9.6 projected wins to 8.3, a steep decline that is almost a half-win larger than the single loss in the standings can explain. A similar but smaller effect is seen for big underdogs who win. Some real information may be conveyed there.

One final pivot point: what if we ignore wins and losses and just look at margin of victory? In some ways, a 5-point favorite winning by 30 is more impressive than a 5-point underdog winning by 1, although both count for the same amount in the standings.

To start, here’s a summary of all teams that didn’t push in Week 1, divided into groups that covered the spread and groups that did not:

 

| N = 224  | W%    | Average Spread | Cover By | Team Total | Actual Wins | Difference |
|----------|-------|----------------|----------|------------|-------------|------------|
| No Cover | 0.174 | -0.413         | -10.618  | 8.010      | 7.474       | -0.536     |
| Cover    | 0.823 | 0.361          | -10.618  | 7.950      | 8.502       | 0.551      |

 

Not too much here. Both buckets of teams looked largely the same before the season (a predicted win total of about 8), and the average spread they were up against was fairly neutral. The teams that beat the spread at all, regardless of margin, won a full game more on the season than those that didn’t, but almost all of that gap is simply the extra Week 1 win itself. There’s no real informational value here yet.
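Here is a sketch of how these cover-margin buckets could be built, again with hypothetical file and column names; 'margin' is the team's Week 1 scoring margin and 'spread' is negative for favorites:

```python
import pandas as pd

df = pd.read_csv("week1_results.csv")         # hypothetical file
df["cover_by"] = df["margin"] + df["spread"]  # >0 covered, <0 missed
df = df[df["cover_by"] != 0]                  # drop pushes, as in the text

# The article's cut points: a touchdown, 10.5, 15, and 20 points.
for threshold in [7, 10.5, 15, 20]:
    for label, bucket in [("no cover", df[df["cover_by"] <= -threshold]),
                          ("cover", df[df["cover_by"] >= threshold])]:
        diff = (bucket["actual_wins"] - bucket["team_total"]).mean()
        print(threshold, label, round(diff, 3))  # season-long beat/miss
```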

How about teams that covered by at least a touchdown? Let’s see:

 

| N = 140  | W%    | Average Spread | Cover By | Team Total | Actual Wins | Difference |
|----------|-------|----------------|----------|------------|-------------|------------|
| No Cover | 0.057 | -0.639         | -15.175  | 8.100      | 7.075       | -1.025     |
| Cover    | 0.943 | 0.535          | -15.175  | 7.903      | 8.743       | 0.839      |

 

Teams that cover by at least a touchdown almost always win. Such teams go on to win 0.839 games more than expected preseason, and only about half of that (0.943 − 0.500 = 0.443) is simply the result of getting the win booked.

Around this point we start to see diminishing marginal returns for the teams that cover. Teams that cover by 10.5 or more look pretty similar:

 

| N = 92   | W%    | Average Spread | Cover By | Team Total | Actual Wins | Difference |
|----------|-------|----------------|----------|------------|-------------|------------|
| No Cover | 0.011 | -0.060         | -18.582  | 7.931      | 6.792       | -1.139     |
| Cover    | 0.989 | -0.060         | -18.582  | 8.022      | 8.882       | 0.861      |

 

However, the effect continues to increase for the teams that fail to cover. Teams that cover by 15 or more:

 

| N = 56   | W%    | Average Spread | Cover By | Team Total | Actual Wins | Difference |
|----------|-------|----------------|----------|------------|-------------|------------|
| No Cover | 0.000 | -0.009         | -22.295  | 8.021      | 6.495       | -1.526     |
| Cover    | 1.000 | -0.188         | -22.295  | 8.074      | 8.898       | 0.824      |

 

And by 20:

 

| N = 30   | W%    | Average Spread | Cover By | Team Total | Actual Wins | Difference |
|----------|-------|----------------|----------|------------|-------------|------------|
| No Cover | 0.000 | 0.633          | -26.300  | 8.315      | 6.544       | -1.771     |
| Cover    | 1.000 | -0.633         | -26.300  | 8.216      | 8.936       | 0.719      |

 

At this point, the teams that covered by a lot have taken a small step back. But the teams that wildly underperformed gambling expectations in Week 1, despite being predicted as better teams than the ones that blew them out, totally collapsed the rest of the season. They lost 1.77 wins off their season-long expectation, only about a half-win of which can be explained by the loss itself. The other ~1.2 wins are additional informational value, a reduction in the uncertainty that led to this inquiry in the first place.

The effect continues to grow, to the point where teams that failed to cover by 30 or more went on to win fewer than 4.5 games the rest of the season, despite being rated as average teams beforehand.

Margin of victory can be tricky in football, where running up the score doesn’t seem to tell you much. From this, though, it looks like Week 1 results can’t tell you much about which teams are going to wildly exceed expectations, but they can give you a pretty good idea of which teams are going to struggle the rest of the way.

Konstantin Medvedovsky writes about football science, both college football and the NFL.
