Thanks. This covers the predictive validity of team record, and it's exceptionally weak, to the point of being pretty meaningless.
"My best guess is that SOS is essentially meaningless until 4-6 games have been played in the current season. At that point you should have enough data to start determining a trend."

Perhaps. But I'm skeptical, and I'm surprised no one seems to have run the data on this, given how widely accepted SOS is as a conceptual framework for decision-making in FF circles.
Even this can be thrown off by players returning from early-season suspensions or injuries around Weeks 5-8. Through Week 6 of 2017, the Patriots looked disastrous against the pass, an obvious must-start matchup for opposing QBs, but then Week 7 hit and they completely transformed, with only Pittsburgh topping 245 passing yards through the end of the season.
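If you wanted to operationalize that 4-6 game idea, a minimal sketch might look like the following (the file name and columns like `def_team` and `pass_yds_allowed` are hypothetical; this assumes you have a per-defense game log):

```python
import pandas as pd

# Hypothetical game log: one row per defense per week.
# Columns assumed: def_team, week, pass_yds_allowed
games = pd.read_csv("game_logs.csv").sort_values(["def_team", "week"])

# Rolling five-game average of passing yards allowed, per defense.
# min_periods=4 withholds judgment until ~4 games are banked,
# per the "meaningless until 4-6 games" intuition above.
games["recent_form"] = (
    games.groupby("def_team")["pass_yds_allowed"]
         .transform(lambda s: s.rolling(5, min_periods=4).mean())
)

# Recent-form rank within each week (1 = stingiest pass defense).
games["def_rank"] = games.groupby("week")["recent_form"].rank()
```

Even then, a rolling window only lags a regime change like that 2017 Patriots turnaround; it can't anticipate one.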
"Anyone have links to any analyses that establish that these rankings have any utility? If so, are certain positions more durable/reliable than others?"

Great question; I am curious as well.
I shared my own analysis of this years ago on FBG. It's been a long time, but my recollection was:
"I would think some value could be gained by being aware of the strength of the defenses on a team's schedule, and there are some folks who do projections for each game of the whole season before the season has begun."

Exactly the sort of thing that needs to be addressed. Some value could be gained by being aware of strength of defense if--and only if--our models for strength of defense are valid. If they are not valid, then there is no value in feeding erroneous SOS data into an equation (or even a heuristic) to enhance or diminish a player's value/ranking. My takeaway from the limited analyses that have been conducted on SOS or SOD (strength of defense) is that they are rubbish, which suggests we should not use SOS to guide our rankings.
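The validity question is at least testable. Here's a minimal sketch, assuming you have fantasy points allowed per defense per season (the file name and columns are hypothetical): if last year's defensive ranks carry any signal into this year, the year-over-year rank correlation should be meaningfully positive.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical data: one row per team per season.
# Columns assumed: team, season, fpts_allowed (to opposing QBs, say)
df = pd.read_csv("defense_fpts_allowed.csv")

# Rank defenses within each season (1 = stingiest).
df["rank"] = df.groupby("season")["fpts_allowed"].rank()

# Pair each team-season with the same team's following season.
paired = df.merge(
    df.assign(season=df["season"] - 1),  # shift next year back to join
    on=["team", "season"],
    suffixes=("_y1", "_y2"),
)

# Preseason SOS is mostly last year's ranks in disguise, so this
# number is roughly an upper bound on its usefulness.
rho, p = spearmanr(paired["rank_y1"], paired["rank_y2"])
print(f"year-over-year rank correlation: rho={rho:.2f} (p={p:.3f})")
```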
"Ian Allen at Fantasy Index did a piece on this a few years back. I have the article somewhere, not getting it now, but the bottom line is that it is virtually useless except at the extremes. I think it came to the conclusion that if your opponents had a combined record of 137-119 you should downvalue a little, and if their record is 140-116 then it's even worse. On the flip side, a record of 119-137 or 116-140 demonstrates a benefit to players. Everything else is negligible. He backed it up pretty well as I recall."

I wrote the post above this before seeing yours. This is in line with what I'm thinking and gives me a little hope we might salvage something from preseason SOS rankings. I'll take a look at Allen's analysis if I can find it later today. But this is promising.
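Going by the cutoffs as quoted above (a 16-opponent schedule whose prior-year records sum to 256 games), the rule of thumb reduces to something like this; note the thresholds are his recalled numbers, not anything I've verified:

```python
def sos_adjustment(opponent_wins: int) -> str:
    """Bucket a schedule by combined prior-year opponent wins
    (out of 256 games), per the recalled Fantasy Index cutoffs."""
    if opponent_wins >= 140:   # 140-116 or tougher
        return "downgrade more"
    if opponent_wins >= 137:   # 137-119 through 139-117
        return "downgrade a little"
    if opponent_wins <= 116:   # 116-140 or easier
        return "upgrade more"
    if opponent_wins <= 119:   # 117-139 through 119-137
        return "upgrade a little"
    return "ignore SOS"        # everything in between is noise

print(sos_adjustment(138))  # -> "downgrade a little"
```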
I have been curious for a while about how much the opposing defense really affects offensive players. I tried playing around with this a while ago (scatterplotting fantasy points vs. defense rating, or something like that, by position) and wasn't seeing a clear line, but that was very rudimentary. I would like to know how much of a factor the defense really is vs. individual talent, offensive factors, weather, etc.
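The rudimentary version is easy to redo, for anyone who wants to try. A sketch (hypothetical file and column names), one point per player-game with a least-squares line on top; a near-flat slope would match the "no clear line" result:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical data: one row per player-game.
# Columns assumed: position, fantasy_pts, opp_def_rating (higher = tougher)
df = pd.read_csv("player_games.csv")
wr = df[df["position"] == "WR"]

plt.scatter(wr["opp_def_rating"], wr["fantasy_pts"], s=8, alpha=0.3)

# Degree-1 fit; np.polyfit returns (slope, intercept).
slope, intercept = np.polyfit(wr["opp_def_rating"], wr["fantasy_pts"], 1)
xs = np.linspace(wr["opp_def_rating"].min(), wr["opp_def_rating"].max(), 100)
plt.plot(xs, slope * xs + intercept)

plt.xlabel("opponent defense rating")
plt.ylabel("fantasy points")
plt.title(f"WR points vs. defense rating (slope = {slope:.2f})")
plt.show()
```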
That's not to say that player values don't go up/down depending on the quality of the opponent. I'm pretty sure they do. I just don't have confidence in our ability to accurately predict strength of schedule during the preseason. Perhaps if we isolated only the two tails at the 10th percentile (i.e., the defenses we believe are REALLY GOOD or REALLY BAD), there's better correlation between what we predict and what actually happens, and then we could make adjustments for a limited number of players based on those data. But I just haven't seen that hypothesis fleshed out in any good analyses yet.
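That tails hypothesis is cheap to test, at least. A sketch under the same kind of hypothetical data (preseason predicted defensive rank vs. actual end-of-season rank, one row per team): keep only the schedules we were most confident about and see whether the correlation improves.

```python
import pandas as pd

# Hypothetical columns: team, pred_rank, actual_rank
df = pd.read_csv("sos_predictions.csv")

# Keep only the tails: defenses predicted in the top or bottom 10%.
lo, hi = df["pred_rank"].quantile([0.10, 0.90])
tails = df[(df["pred_rank"] <= lo) | (df["pred_rank"] >= hi)]

# If the hypothesis holds, the tails correlation should be
# noticeably stronger than the full-sample one.
print("full sample:", df["pred_rank"].corr(df["actual_rank"], method="spearman"))
print("tails only: ", tails["pred_rank"].corr(tails["actual_rank"], method="spearman"))
```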
Yes, "Pythagorean expectation" basically just means point differential. Let's say two teams both score exactly as many points as they give up. The Rams get a few breaks and go 10-6, the Cards are unlucky and go 6-10. From a PE perspective, they should have both gone 8-8, and there is no difference between the two. So if New England is going to play the Rams next season and Green Bay is playing Arizona, both NE and GB have the same SOS based on those opponents.Leroy Hoard said:Along the lines of zftcg's post above, there was an article that adjusts for point differential and that supposedly makes last year's records of opponents more accurate. I believe Vegas projections are based more on this model.
I think you pretty much have it with that last part: start out with what they really think, then adjust to projected and actual betting action so as to lessen their risk while still maximizing profit.
I'll admit I have no idea how Vegas calculates over/unders for team wins. I had always assumed it was just a wisdom-of-crowds kind of thing, where if they lean too far in the wrong direction the sharps will hammer them back to an equilibrium. But maybe they use PE to make the initial calculations? I'd be curious to know how it works.
Actual numbers per this website: https://datashoptalk.com/nfl-predicting-2018-wins/
Interesting. I'd never seen them all laid out like that before. Is it normal to have such a narrow distribution? Obviously it's highly unlikely that all 32 teams end up with between 6 and 10 wins.
I just think regression to the mean sometimes outweighs the extreme ranges that previous win totals would predict. Cleveland won't go 0-16 again, for instance.
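The narrowness isn't necessarily wrong, either: projections are means, and actual records pile a full season of variance on top. A crude simulation (assuming, unrealistically, independent games at each team's projected win rate) shows how 6-to-10-win projections still fan out:

```python
import numpy as np

rng = np.random.default_rng(0)

# 32 teams projected between 6 and 10 wins, like the Vegas totals above.
projected = rng.uniform(6, 10, size=32)

# Crude model: 16 independent games, each won with p = projected / 16.
actual = rng.binomial(n=16, p=projected / 16)

print("projected range:", projected.min().round(1), "to", projected.max().round(1))
print("actual range:   ", actual.min(), "to", actual.max())
# Actual win totals typically spill well outside the 6-10 band.
```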
fantasyindex.com does a run on this topic every year, and basically concludes that these rankings are useless/meaningless for the most part. SOS for the playoff weeks is something that might be important, but as a whole, SOS really doesn't matter.