David Yudkin
Footballguy
ONE TIME in the past 14 years. (That team was the 2003 Patriots.)
Any thoughts on why that is?
Why only go back 14 years? The other years don't skew the stats?

One reason is that the game changed forever with the start of free agency. Rosters turned over way more. That happened in 1994. I would have to look it up again, but I believe there was one other team that had the best record and won the title in the free agency era. So to answer your question, yes, I would say the earlier years would skew the stats.
Most years there was a single team with the best record. In a couple of years, there were two teams with the same record (and in the same conference), so obviously the team that had the #1 seed would be considered the team with the best record (seeing how they would have hosted the game against the team they were tied against).

I would have expected a slightly higher number (say 3 or so), but it's not altogether shocking if you consider it this way: the point of the playoffs is for teams to prove against top competition that they are the best. Every team plays a different set of opponents, and the odds that the team with the best record is actually the best team are fairly low. The difference between 14-2 and 12-4 could come down to as little as a dropped pass and a blown call. When you throw in the parity/any-given-Sunday nature of the NFL, where the best team (however you want to define it) can lose not only to other top teams but even to bad teams, it's not as surprising.

By the way, I didn't look it up, but was there really a clear-cut "best record" each of the past 14 years? Seems like there must have been a few years where there was a tie at 13-3 or so.
I don't know why, but the statistic is even more surprising when you consider that the playoffs have been expanded and the best team gets a first-round bye. Or maybe that is part of the problem? Maybe the extra week off, plus perhaps a week or two at the end of the year when they have it wrapped up, means that they lose their competitive sharpness?
I don't have the numbers, but I thought the four teams with the first-round bye won something like 75% of their games historically? I wonder how many "best record" teams in this period lost in the CCG or SB vs. the first round?
Fixed. ONE TIME in the past 15 years. (That team was the 2003 Patriots.)
How many years has the best team gotten a bye? Since the expansion from 14 to 16 games, correct? Personally I think the bye is overrated.

Yes, bye weeks started in 1978, when they switched to a 16-game schedule. In that time, the home/bye-week teams have gone 97-39 (.713). But over the past 7 years, the bye-week teams have gone 15-13 (.536).
And that could be because there are more franchise QBs now.

wildbill said:

I went to pro-football-reference.com and looked back to 1990, the season they added a second wild card team to each conference.
In the 22 years since then the team with the best record has won the Super Bowl six times. But all six times happened in the first 14 seasons, and none in the last eight seasons.
The team with the best record has lost its first game seven times over those 22 years, but only twice in the first 14 seasons; the other five times have come in the last eight seasons.
Teams with a bye have gone 64-24 (72.7%) in that same time. But they were 45-11 (80.4%) over the first 14 seasons, and only 19-13 (59.4%) over the last eight.
So it appears that the gap between the teams at the top and the teams just below them has gotten smaller over the last decade or so.
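For anyone who wants to double-check, the winning percentages implied by those records take only a couple of lines of Python (the records themselves are the ones quoted in the post, not re-derived):

```python
# Records quoted above for teams coming off a first-round bye (1990 onward)
records = {
    "all 22 seasons": (64, 24),
    "first 14 seasons": (45, 11),
    "last 8 seasons": (19, 13),
}
for span, (wins, losses) in records.items():
    # 64-24 -> 72.7%, 45-11 -> 80.4%, 19-13 -> 59.4%
    print(f"{span}: {wins}-{losses} = {wins / (wins + losses):.1%}")
```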
Maybe because those teams beat up on their divisions and padded their regular season stats against inferior opponents who didn't make the playoffs? Just a thought.
I went and ran the numbers. Since the advent of the bye weeks in 1978 . . .

The teams earning byes had a .713 winning percentage.

In league championship games in that time, the home team had a .667 winning percentage.

In Super Bowls, the team with the better record had a .643 winning percentage (several games had teams with equal records, so I left those out).

By my math, that would translate into a 30.6% chance the better team wins out to be Super Bowl champs. So in 15 seasons, the best team "should" have won 4.6-5 times. Granted, not a huge sample size, but a fair amount worse than what the numbers say we should expect.
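Spelling that multiplication out as a quick sketch (using the three win rates quoted above, and treating the rounds as independent, which is itself an assumption):

```python
# Win rates quoted above (since the 1978 playoff format change)
p_divisional = 97 / (97 + 39)  # bye/home teams in their first game, .713
p_ccg = 0.667                  # home teams in conference championship games
p_sb = 0.643                   # team with the better record in the Super Bowl

# Chance a top seed wins all three games in a single postseason
p_title = p_divisional * p_ccg * p_sb
print(f"per-season chance: {p_title:.1%}")                   # 30.6%
print(f"expected titles in 15 seasons: {15 * p_title:.1f}")  # 4.6
```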
Right, but in the last few seasons (7 or 8) the bye has not been a big advantage. Why is that?
That's why I started this thread.
Fallacy of multiple endpoints, Texas sharpshooter fallacy, and clustering illusion.
In English please.
Fallacy of multiple endpoints: look at the data, find out which endpoints make the most compelling argument, present said data. If you're going to cherry-pick your endpoints, you can make any case look compelling. For instance, I could say that home teams are 7-1 so far this postseason, so obviously HFA is huge... or I could say that bye-week teams are barely over .500 over the last 7 years, so obviously HFA is irrelevant. Why did I choose just this season, or the last 7 years? Because those were the data sets that resulted in the most eye-catching statistics. When you look at the raw data and *THEN* select which span you're going to pay attention to, you're committing a textbook multiple-endpoints fallacy.

Texas sharpshooter fallacy: this comes from the joke about the "Texas Sharpshooter". Basically, a guy fired a bunch of shots at the side of a barn, then drew a target around the biggest cluster of bullet holes and pronounced himself the best sharpshooter in Texas. The idea here is that, while any given data set might be unlikely to result from chance alone, if you look at enough data sets, it's unlikely that you won't find at least one result that was unlikely to arise from chance alone. There are hundreds of different data sets here (record of home teams, record of division winners, record of teams with a better record, record of teams with a bye, record of teams that entered the postseason on a losing streak, record of teams that entered the postseason on a winning streak, etc.). In any given set, there's a result that is less likely... but the odds are, given the number of sets, that some of those "unlikely results" are going to show up. The question here shouldn't be "what are the odds of this particular unlikely event happening by chance?"; it should be "what are the odds, given the size of the data, that some particular unlikely event will wind up happening by chance?"
To illustrate the difference: shuffle a deck of cards. The odds of the cards winding up in that exact order are 1 in 8.07 × 10^67 (to put that number in perspective: it is estimated that there are 1.33 × 10^50 atoms on earth. Take that number and multiply it by a billion. Then multiply it by a billion again. That's roughly how many different orders a deck of cards could find itself in). Now, I could shuffle a deck of cards and say "the odds of the cards winding up in this order are so remote that this deck of cards proves *insert theory here*", but the truth is that, while each individual outcome is mind-bogglingly improbable, the odds of winding up with one of those improbable outcomes are a virtual certainty.
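For the curious, that figure is just 52 factorial, which Python will compute exactly:

```python
import math

# Number of distinct orderings of a standard 52-card deck: 52!
orderings = math.factorial(52)
print(f"{orderings:.3e}")  # 8.066e+67
```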
In the 22 years I looked at, the team with the best record or a team within one game of the best record won the Super Bowl 16 times. But five of the years where it didn't happen have come in the past seven years.

When you think about it, what is really the difference between a 14-2 team and a 13-3 one? Not much - a lucky bounce here, a dropped pass there, maybe a missed chip-shot field goal. You could even say the same for a 12-4 squad vs. a 15-1 one.
Wow, who let the brainiac into the shark pool.
Clustering illusion is a simple one. It's just the human tendency to see clusters and interpret them as patterns. Random is messy. When most people think of random, they think of alternating, but that couldn't possibly be further from random. Ask someone to pretend they flipped a coin 50 times and make up the results. Their "random" results will invariably look something like this: H, T, H, T, H, H, H, T, T, H, T, H, H, T, H, H, T, T, T. True randomness, on the other hand, is not alternating- it'll go on crazy streaks, it'll skew one way or the other, and it will almost certainly produce lots of clusters. People interpret those clusters as proof of a pattern. Quite the contrary- those clusters are instead proof of the absence of a pattern.
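A quick simulation makes the point - this is just a sketch with a fixed seed for reproducibility, but genuinely random flips tend to produce longer streaks than most people would ever fake:

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical consecutive results."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(1)  # fixed seed just so the sketch is reproducible
flips = [random.choice("HT") for _ in range(50)]
print("".join(flips))
print("longest streak:", longest_run(flips))
```

Run it a few times with different seeds and you'll see the clusters the post is describing.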
In short, while I would expect the team with the best record to average more than a 7% chance to win the SB over any given span, I remain highly skeptical of the idea that such teams winning only 7% of the time over this particular span somehow speaks to any greater truth than "random is messy".
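As a rough back-of-the-envelope check - taking the ~30.6% per-season estimate from earlier in the thread at face value and treating seasons as independent, both big assumptions - the chance of the best-record team winning at most once in 14 seasons is small but hardly impossible:

```python
from math import comb

p, n = 0.306, 14  # assumed per-season title chance for the top seed; 14 seasons

# Binomial probability of 0 or 1 titles in n seasons
p_at_most_one = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (0, 1))
print(f"{p_at_most_one:.1%}")  # 4.3%
```

So even under the most generous model, a drought like this happens by pure chance roughly one time in twenty-five - unusual, but not evidence of anything by itself.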