Fantasy Football - Footballguys Forums


Strength of Schedule - QBs

cobalt_27

Footballguy
I know. We all subscribe to SOS. It's just another tool to use. When deciding between 2 "similar" QBs, the one with the "easier" schedule should get preference, right?

I don't buy it.

Anyone want to assert that it is a valid predictor with some statistical data? I presume we all have just assumed--because it sort of makes sense--that SOS helps in some way to predict outcome. But, if so, this should be a testable hypothesis, and I'd like to see something--anything--that validates its utility. All the statistical models I've run over the last 3 years with QBs (Passing Yds, Passing TDs) suggest that there is absolutely no predictive value, whatsoever.

I'd like to see someone defend the use of this (QBs only for the moment).

Carson Palmer faced one of the toughest passing schedules last year. The defenses he faced allowed an average of only 3070 passing yards (192/game) in 2004 (toughest 16-game schedule in the past three years). Of course, he threw for over 3800 yards.

Bledsoe had one of the easiest defensive schedules in 2003. The defenses he faced allowed an average of 3736 passing yards (233/game) in 2002. Yet, despite this cake schedule, and despite throwing for over 4300 yards the year before, he passed for only 2860 yards.

While these are just anecdotes, every regression and ANOVA model I run suggests that there's absolutely no reason to pay any attention to last year's performances when trying to assess the "strength" of this year's schedule.

So, someone please accept the challenge. How is SOS even defensible? Bonus points if you can substantiate it with a statistical defense.

 
:hifive:
 
While these are just anecdotes, every regression and ANOVA model I run suggests that there's absolutely no reason to pay any attention to last year's performances when trying to assess the "strength" of this year's schedule.
Exactly. That's why the only way to do this is to predict this year's performance. Pretty hard to do during training camp. I usually do this sort of analysis after the first month of the season so I can think about undervalued/overvalued players for trade. But I may try Gray's Ultimate SOS at some point this year -- at least he tries to predict 2006, though I'm not sure of the basis of his predictions, and it hasn't been updated since June.

 
I know. We all subscribe to SOS. It's just another tool to use. When deciding between 2 "similar" QBs, the one with the "easier" schedule should get preference, right?

I don't buy it.
Sorry cobalt_27, but it's important. And that's an unwavering, absolute "it's important."
Carson Palmer faced one of the toughest passing schedules last year. The defenses he faced allowed an average of only 3070 passing yards (192/game) in 2004 (toughest 16-game schedule in the past three years). Of course, he threw for over 3800 yards.
Carson Palmer threw for over 3800 yards for lots of reasons. He played every single game. His team likes to throw the ball. He's a tremendous player. He's got an excellent WR. He's got a very good offensive line, and another very good WR. His defense is porous, which keeps games close and Palmer gunning. Carson Palmer's schedule was supposed to be hard, based on what you knew in 2004. It was hard...very hard. The fact that Palmer was able to do so well in spite of this is a testament to the earlier things I mentioned.
Bledsoe had one of the easiest defensive schedules in 2003. The defenses he faced allowed an average of 3736 passing yards (233/game) in 2002. Yet, despite this cake schedule, and despite throwing for over 4300 yards the year before, he passed for only 2860 yards.
Yes, Bledsoe passed for 1499 fewer yards. Bledsoe also lost Peerless Price (1287), Jay Riemersma (350) and Larry Centers (388) in that off-season. Losing over 2,000 yards in receiving options isn't a good way to maintain your passing stats. This topic was covered and widely predicted on these boards in the summer of 2003.
While these are just anecdotes, every regression and ANOVA model I run suggests that there's absolutely no reason to pay any attention to last year's performances when trying to assess the "strength" of this year's schedule.
So, someone please accept the challenge. How is SOS even defensible? Bonus points if you can substantiate it with a statistical defense.
You're up first actually. You need to explain why player projections are reliable -- i.e., you think Manning's going to pass for lots of yards this year -- but team projections are unreliable -- i.e., you don't think Tampa Bay's Defense is going to be any good this year.
Anyone want to assert that it is a valid predictor with some statistical data? I presume we all have just assumed--because it sort of makes sense--that SOS helps in some way to predict outcome. But, if so, this should be a testable hypothesis, and I'd like to see something--anything--that validates its utility. All the statistical models I've run over the last 3 years with QBs (Passing Yds, Passing TDs) suggest that there is absolutely no predictive value, whatsoever.

I'd like to see someone defend the use of this (QBs only for the moment).
This is a bit more complicated. I've got the data, but I'll hold off for a moment to see where we're going.
 
You're up first actually. You need to explain why player projections are reliable -- i.e., you think Manning's going to pass for lots of yards this year -- but team projections are unreliable -- i.e., you don't think Tampa Bay's Defense is going to be any good this year.
Well...I guess I would argue that player projections, but for a handful of guys, are not very reliable. It's my contention that with all the changes that teams go through each year on both sides of the line, the idea of SOS--which is predicated on what happened LAST YEAR--is useless. But, it doesn't matter what I think. What matters is what the data show. I came into this with some skepticism, but mostly ambivalence. I come out totally unconvinced. But, since we all follow the SOS bible here, I'd like to see some data that suggest why it is a viable and valid measure.

(Let's work just with QBs to keep some focus.)

 
But if you don't care about SOS because it's unreliable -- and you don't care about player projections because they're not reliable -- I assume you'd be equally happy with Eli Manning, Philip Rivers or Chad Pennington as your starting QB?
 
:confused: I suspect we're talking past each other.

There are any number of variables involved in projecting outcome. Individual talent, team talent, team concept, chemistry, etc. And, some would say SOS adds a slice in our little Venn diagram.

I'd say on the surface that makes sense. But, I haven't seen any data to support it. You guys must have some analyses you've run to validate it as a successful measure. That's all I'm asking anyone to come up with.

The problem with SOS is that you're predicting 32 teams across 32 schedules, all of whom have undergone numerous changes with players who will interact differently within the new system (changing chemistry, changing concept, changing talent, and ultimately changing outcome from the previous year). My suspicion is that this dynamic makes the predictive value of SOS meaningless. My own analyses support my suspicion. I'm asking if anyone does have good, hard data (beyond good theory) to support with an R^2 or Beta weight or something how good it is at elevating the QB position in some cases, while pulling it down in others.

 
joffer said:
But, since we all follow the SOS bible here,
news to me, but at least i know what i think now! :thumbup:
Precisely. You get flamed here if you even dare suggest that SOS is a weak measure. Of course, I have yet to see data to support its validity, which is why the regard for its Truth is somewhat perplexing to me--much like the bible.
 
The problem with SOS is that you're predicting 32 teams across 32 schedules, all of whom have undergone numerous changes with players who will interact differently within the new system (changing chemistry, changing concept, changing talent, and ultimately changing outcome from the previous year). My suspicion is that this dynamic makes the predictive value of SOS meaningless. My own analyses support my suspicion. I'm asking if anyone does have good, hard data (beyond good theory) to support with an R^2 or Beta weight or something how good it is at elevating the QB position in some cases, while pulling it down in others.
The correlation coefficient between defenses ranked by 2006 FPs allowed to QBs and Chase's 2005 ranking of the defenses in the QBBC article was 0.336. No, it's not great -- but it's not meaningless either. The slice that SOS adds is probably as important as -- if not more important than -- the slice that a QB's WRs add. That's just a rough guess of course, but I'd imagine that the 11 players trying to hurt the QB affect the QB more than the 3 players trying to help him.
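As a reference point for how a figure like that 0.336 can be computed: one standard way to correlate two rankings is Spearman's rho, which for tie-free rankings reduces to a simple closed-form formula. The defense rankings below are invented examples, not the QBBC data.

```python
def spearman_rho(rank_a, rank_b):
    """Spearman's rho for two rankings with no ties:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical: where each of 8 defenses ranked last year vs. this year.
last_year = [1, 2, 3, 4, 5, 6, 7, 8]
this_year = [2, 1, 5, 3, 4, 8, 6, 7]
print(round(spearman_rho(last_year, this_year), 3))  # → 0.833
```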
 
If you post a thread like this, either it just hasn't worked for you or you haven't tried it. For those of us who have tried it and had success... maybe it's coincidence... maybe not. But I know for sure I am NOT changing one thing I do!! Bring on the SOS!!

 
The correlation coefficient between defenses ranked by 2006 FPs allowed to QBs and Chase's 2005 ranking of the defenses in the QBBC article was 0.336. No, it's not great -- but it's not meaningless either. The slice that SOS adds is probably as important as -- if not more important than -- the slice that a QB's WRs add. That's just a rough guess of course, but I'd imagine that the 11 players trying to hurt the QB affect the QB more than the 3 players trying to help him.
So, how did you rank the defenses?
 
So, how did you rank the defenses?
A combination of four factors, which I've recently been told is probably more complicated than it's worth. The basic thing to do is get an average of team ranks in two categories: FP allowed and QB rating allowed. Then you go and adjust based on obvious stuff. Did Ray Lewis miss 8 games? Did Patrick Surtain sign with the team? Was Julius Peppers drafted by the team? Move up and down.

One thing I haven't done yet -- mostly because I just thought of it recently, and it sounds too nerdy even for me -- is to take the two stats above (FPA and QBRA) and normalize them based on schedules from the previous season. That would likely add a not insignificant amount of reliability to the data. It's on my to-ask-Drinen list.
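The rank-averaging step described here is simple enough to sketch. The team names and ranks below are hypothetical, and the manual injury/personnel adjustments are left as a comment since they're subjective.

```python
# Average each defense's rank in fantasy points allowed (FPA) and QB
# rating allowed (QBRA), then sort. Ranks here are invented examples.

fpa_rank  = {"BAL": 1, "TB": 2, "CHI": 4, "DEN": 3}
qbra_rank = {"BAL": 2, "TB": 1, "CHI": 3, "DEN": 5}

combined = {t: (fpa_rank[t] + qbra_rank[t]) / 2 for t in fpa_rank}

# Next step (not modeled here): nudge teams up or down for things the
# raw stats miss, e.g. a star defender lost or signed in the offseason.
for team, score in sorted(combined.items(), key=lambda kv: kv[1]):
    print(team, score)
```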

 
Ok, I'm with you now. One of the things you mentioned that I liked was using FP instead of a single metric (e.g., Pa_yds or TDs). I ran a few things in SPSS with FP and came up with some interesting data that lend support to some reliability from year-to-year.

But, I'm not sure if this answers the question about whether it's a good predictor when all the data are shuffled into a 16-game season.

 
Well it's not a good predictor if you're expecting it to say "tough predicted fantasy schedule = bad season for QB X." I think it's a decent predictor -- and absolutely better than a roll of the die -- if you're expecting it to say "tough predicted fantasy schedule = tough schedule for QB X." That means QB X deserves some sort of downgrade.
 
So, someone please accept the challenge. How is SOS even defensible? Bonus points if you can substantiate it with a statistical defense.
I was thinking something similar last week. I used Gray's SOS all last year. I was wondering if it is too narrowly focused, so I ran a simple test. The data are at work, but I can summarize.

I took the SOS values for QBs from the preseason of 2005 and looked at the week 17 values. Then, I took the week 17 values after week 16 of 2005. The correlation between the two sets was around .32.

I then simply calculated the FPs allowed for the passing defenses both at the end of 2004 and the end of 2005. After normalizing the data for the difficulty of the offenses they played, the correlation between the two sets was about .37.

While that probably doesn't show that the second way was statistically better, it showed me that without adding any subjectivity, it was just as good. My guess (and next test) is that total defense is more predictive than passing or rushing alone.

I don't think this was exactly what you were looking for, but it has got me thinking about how to come up with a more reliable and predictive SOS.
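One plausible way to implement the "normalizing the data for the difficulty of the offenses they played" step is to express each defense's fantasy points allowed relative to what its opponents score on average. The `offense_avg` table and all numbers below are invented for illustration.

```python
# Avg fantasy points per game each offense scores league-wide (invented).
offense_avg = {"IND": 22.0, "STL": 20.0, "CLE": 12.0, "CHI": 14.0}

def adjusted_fpa(fp_allowed_per_game, opponents):
    """Schedule-adjusted FP allowed: actual FP allowed divided by what
    the faced offenses score on average. Ratio < 1.0 means the defense
    held offenses under their norm."""
    expected = sum(offense_avg[o] for o in opponents) / len(opponents)
    return fp_allowed_per_game / expected

# A defense that gave up 16 FP/game against a slate of strong offenses:
print(round(adjusted_fpa(16.0, ["IND", "STL"]), 3))  # 16 / 21 ≈ 0.762
```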
 
Well it's not a good predictor if you're expecting it to say "tough predicted fantasy schedule = bad season for QB X." I think it's a decent predictor -- and absolutely better than a roll of the die -- if you're expecting it to say "tough predicted fantasy schedule = tough schedule for QB X." That means QB X deserves some sort of downgrade.
Agree with all of that. What I'd like to quantify, though--what would REALLY sink some teeth into SOS--is a way of saying how much to upgrade or downgrade. If, for instance, we discovered that SOS is insignificantly related to outcome--or that it accounts for, say, only 2% of the variance--then why bother? Or, maybe it affects some positions differently than others. But, what if we found out that it accounts for 15% of the variance or something more substantial? Well, damn...that's something I'm really interested in incorporating into some of my projections.

I think we think it makes a difference...but I don't think we know that yet. And, if it makes a difference, I want to know how much.

Damn...I feel like I'm in a global warming discussion again.

 
Here's what I'm talking about...

* I used our league's FP, which accounts for yards, attempt/comp, tds, ints, etc. It runs the gamut.

* I calculated years 2004 and 2005--both in terms of how many points each team scored at QB (TeamQB) and how many points each defense gave up to QBs (DEFvQB) for both those years.

* Then, comparing the average points allowed by the schedule each TeamQB faced to what they actually scored, you can just run a simple regression.

First, the correlation between TeamQBs in 2004 and 2005 is very good (.466; p<.01), so we can safely say that how a TeamQB does in 2004 is pretty reliably correlated to how they'll perform in 2005. In the model, we want to account for that bias.

When you throw it all into the regression, even when accounting for the productivity that each TeamQB tends to produce, no reliable association between SOS and actual performance was found. The correlation is insignificant at .15 (p=.207) and SOS accounts for only .019 (or 2%) of the variance.

Say that again: We're getting all hot and bothered by 2% of our scores being explained by SOS. And, it isn't even a reliable association, so we really can't even hang our hats on that.

I can't think of any other way to manipulate this to get the supposed effect. So, please throw out ideas, if they come up.

I simply think the theory of SOS is too simplistic to account for how dynamic the year-to-year changes can be across 32 teams and a 16-game schedule. I know this is akin to saying the world is round, when everyone knows it's flat around these here parts, but...

I don't think the data lend support to SOS as a valid predictor of success. Not for quarterbacks, at least.
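For anyone who wants to replicate this kind of setup without SPSS, here is a hedged Python sketch of the same style of model: regress this year's TeamQB points on last year's points plus SOS, then ask how much extra variance the SOS term explains. All numbers below are fabricated (SOS is built to have no real effect); only the mechanics are the point.

```python
import numpy as np

# Sketch of the regression described above: predict 2005 TeamQB points from
# (a) 2004 TeamQB points and (b) schedule strength, then ask how much extra
# variance the SOS term explains. All data are fabricated for illustration.

rng = np.random.default_rng(0)
n = 32
qb_2004 = rng.normal(250, 40, n)                   # last year's TeamQB points
sos = rng.normal(0, 10, n)                         # schedule strength measure
qb_2005 = 0.5 * qb_2004 + rng.normal(125, 30, n)   # SOS deliberately irrelevant

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_base = r_squared(qb_2004.reshape(-1, 1), qb_2005)
r2_full = r_squared(np.column_stack([qb_2004, sos]), qb_2005)
print(f"R^2 without SOS: {r2_base:.3f}, with SOS: {r2_full:.3f}, "
      f"incremental: {r2_full - r2_base:.3f}")
```

The "2% of the variance" figure in the post corresponds to the incremental R^2 here: the full model's R^2 minus the base model's.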

 
For good measure, I ran the same analysis comparing 2004 SOS with 2004 QBs, and the correlation was almost significant (r=.282; p=.059). For the sake of argument, let's just say the two were correlated.

What we really want is a measure of effect size. And, SOS still tanks when it comes to the effect it has on points (1.7%).

If this is enough to blow your dress up, by all means, live it up. But, if we put pairs of defenses side by side, and I picked one of each pair based on SOS while another guy picked based on a coin flip, the data suggest he's going to do no worse than I do at picking the better outcome.

 
SOS is completely invalid as a difference-maker.

There are many more x-factors that weigh much more.

SOS should be viewed more as a guideline for tweaking your expectations slightly, not being the primary determinant in who you select among similar-looking players.

If the guys look that similar, and SOS is the only way to break it, you have not done your research.

 
SOS should be viewed more as a guideline for tweaking your expectations slightly, not being the primary determinant in who you select among similar-looking players.
And, while I would probably still do this myself...I think it's ultimately fool's gold.
 
You're up first actually. You need to explain why player projections are reliable -- i.e., you think Manning's going to pass for lots of yards this year -- but team projections are unreliable -- i.e., you don't think Tampa Bay's Defense is going to be any good this year.
Well...I guess I would argue that player projections, but for a handful of guys, are not very reliable. It's my contention that with all the changes that teams go through each year on both sides of the line, the idea of SOS--which is predicated on what happened LAST YEAR--is useless. But, it doesn't matter what I think. What matters is what the data show. I came into this with some skepticism, but mostly ambivalence. I come out totally unconvinced. But, since we all follow the SOS bible here, I'd like to see some data that suggest why it is a viable and valid measure.

(Let's work just with QBs to keep some focus.)
But if you don't care about SOS because it's unreliable -- and you don't care about player projections because they're not reliable -- I assume you'd be equally happy with Eli Manning, Philip Rivers or Chad Pennington as your starting QB?
:confused: I suspect we're talking past each other.

There are any number of variables involved in projecting outcome. Individual talent, team talent, team concept, chemistry, etc. And, some would say SOS adds a slice in our little Venn diagram.

I'd say on the surface that makes sense. But, I haven't seen any data to support it. You guys must have some analyses you've run to validate it as a successful measure. That's all I'm asking anyone to come up with.

The problem with SOS is that you're predicting 32 teams across 32 schedules, all of whom have undergone numerous changes with players who will interact differently within the new system (changing chemistry, changing concept, changing talent, and ultimately changing outcome from the previous year). My suspicion is that this dynamic makes the predictive value of SOS meaningless. My own analyses support my suspicion. I'm asking whether anyone has good, hard data (beyond good theory)--an R^2 or a beta weight or something--showing how good it is at elevating the QB position in some cases while pulling it down in others.
The correlation coefficient between defenses ranked by 2006 FPs allowed to QBs and Chase's 2005 ranking of the defenses in the QBBC article was 0.336. No, it's not great -- but it's not meaningless either. The slice that SOS adds is probably as important -- if not more so -- than the slice that a QB's WRs add. That's just a rough guess of course, but I'd imagine that the 11 players trying to hurt the QB affect the QB more than the 3 players trying to help him.
So, how did you rank the defenses?
A combination of four factors, which I've recently been told is probably more complicated than it's worth. The basic thing to do is get an average of team ranks in two categories: FP allowed and QB rating allowed. Then you go and adjust based on obvious stuff. Did Ray Lewis miss 8 games? Did Patrick Surtain sign with the team? Was Julius Peppers drafted by the team? Move up and down.

One thing I haven't done yet -- mostly because I just thought of it recently, and it sounds nerdy even for me -- is to use the two stats above (FPA and QBRA) and normalize them based on schedules from the previous season. That would likely add a not insignificant amount of reliability to the data. It's on my to-ask-Drinen list.
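The "average of team ranks" step, minus the subjective adjustments, can be sketched in a few lines of Python. Team names and numbers here are hypothetical:

```python
# Sketch of the basic ranking recipe described above: average each team's rank
# in fantasy points allowed (FPA) and QB rating allowed (QBRA), then re-rank.
# The subjective adjustments (injuries, signings, draft picks) are left out;
# teams and numbers are hypothetical.

def ranks(values):
    """1-based rank of each value, 1 = smallest (ties share the first rank)."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

teams = ["A", "B", "C", "D"]          # hypothetical defenses
fpa   = [240, 310, 280, 260]          # fantasy points allowed to QBs
qbra  = [72.0, 88.5, 80.1, 75.3]      # QB rating allowed

# Lower FPA/QBRA = tougher defense, so rank 1 = toughest in both categories.
avg_rank = [(f + q) / 2 for f, q in zip(ranks(fpa), ranks(qbra))]
toughest_first = sorted(zip(avg_rank, teams))
print(toughest_first)
```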
Ok, I'm with you now. One of the things you mentioned that I liked was using FP instead of a single metric (e.g., Pa_yds or TDs). I ran a few things in SPSS with FP and came up with some interesting data that lend support to some reliability from year-to-year.

But, I'm not sure if this answers the question about whether it's a good predictor when all the data are shuffled into a 16-game season.
Well it's not a good predictor if you're expecting it to say "tough predicted fantasy schedule = bad season for QB X." I think it's a decent predictor -- and absolutely better than a roll of the die -- if you're expecting it to say "tough predicted fantasy schedule = tough schedule for QB X." That means QBX deserves some sort of downgrade.
Agreed, Chase. If I'm looking at two players who appear dead even to me in terms of talent, surrounding talent, and injury risk, I'll look to SOS to break the tie. I'm not going to put much stock into SOS, but in the rare (maybe twice in a draft) event of a dead-even tie, I'll use SOS.

There are two players you view equally. Player A is playing the Bears, Steelers, Bucs, Jags, Ravens, Carolina, Colts, and is in a tough division. Player B is in a soft division and is playing Houston, San Fran, St. Louis, Buffalo, Tennessee, New Orleans, Arizona, Oakland, and the Jets. Seeing their talent, injury risk, and surrounding talent as equal, who would you rather have? Like I said, it might not happen often but once or twice a draft is enough for me to use SOS.

 
When you throw it all into the regression, even when accounting for the productivity that each TeamQB tends to produce, no reliable association between SOS and actual performance was found.
I haven't read the rest of your posts in this thread, but this was important enough to cause me to stop. There isn't -- nor should there be -- a reliably significant association between SOS and actual performance. Just like there isn't -- nor should there be -- a reliably significant association between a QB's actual performance and, say, how many points that QB's team allows a year.

But that's not to say PA is irrelevant (which it may or may not be). Alex Smith sux0r. And if he's on the 49ers or the Bears, he's going to be sux0r. Peyton Manning is good. And whether his defense allows no points or tons of points, he's still going to finish top five. (Note the past two years.)

No one has ever claimed that there's a strong correlation between SOS and actual performance. But that's hardly the same thing as saying SOS is useless. To make the point blatantly obvious, let's assume the following five-team league.

QBA is on TMA, QBB is on TMB ... QBE is on TME.

QBA is the best QB, QBE is the worst, and the others fall right in line. Let's give them ratings of 90, 80, 70, 60, 50.

TMA's schedule is the hardest. TME's schedule is the easiest. Let's rate the schedules as +8, +4, 0, -4, and -8. TMA has a schedule of -8. TME has a schedule of +8.

At the end of the year, QBA scores an 82. QBB scores a 76. QBC - 70. QBD - 64. And QBE a 58. It's pretty clear that the strength of schedule affected things. QBA went from being 40 points better than QBE to just 24 points better. That difference can be attributed to SOS.

If you were to run the correlation coefficient between SOS and QB performance, you'd get a CC of -1. That means a perfectly negative relationship -- as SOS goes up (i.e., gets easier), QB performance goes down (i.e., gets worse). Of course, this is the exact opposite of what actually happened. That's why you don't compare these two variables.
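The five-team toy league is easy to check numerically. A small Python sketch reproducing the scores and the CC of -1 (the `pearson_r` helper is just a plain Pearson correlation):

```python
# Chase's five-team toy example, checked numerically: ratings 90..50, schedule
# adjustments -8..+8, observed scores = rating + adjustment. The correlation
# between SOS and final score comes out exactly -1, as claimed.

ratings = [90, 80, 70, 60, 50]                    # QBA .. QBE
sos     = [-8, -4, 0, 4, 8]                       # TMA hardest .. TME easiest
scores  = [r + s for r, s in zip(ratings, sos)]   # final-season scores

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(scores)                  # [82, 76, 70, 64, 58]
print(pearson_r(sos, scores))  # -1.0
```

The point of the example survives the check: the best QB had the hardest schedule, so the naive SOS-vs-performance correlation is perfectly backwards.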
 
There are two players you view equally. Player A is playing the Bears, Steelers, Bucs, Jags, Ravens, Carolina, Colts, and is in a tough division. Player B is in a soft division and is playing Houston, San Fran, St. Louis, Buffalo, Tennessee, New Orleans, Arizona, Oakland, and the Jets. Seeing their talent, injury risk, and surrounding talent as equal, who would you rather have? Like I said, it might not happen often but once or twice a draft is enough for me to use SOS.
Not to bust on your TLD, but this is a good example. The Jets ranked 2nd in FP allowed last year to QBs. You would not have wanted your QB to play the Jets last year. I think, in general, disinterest more than anything else leads to defenses being unpredictable.
 
We got into this pretty good at the FBG Retreat this year. I was in one camp, defending SOS, and all the smart staffers on this site were in the other. We agreed on a few basic things though.

Let's say QBGOOD averages 25 FP/G. And QBBAD averages 15 FP/G. And DEFGOOD allows just 15 FP/G to QBs. And DEFBAD allows an average of 25 FP/G to QBs. The league average QB scores 20 FP/G.

If you were equally confident in those numbers -- that QBGOOD is actually a 25 FP/G guy, and DEFGOOD is actually a 15FP/G defense -- and for this purpose, we can just stipulate that all the numbers are perfectly representative...then I think we're all in agreement that we'd expect QBGOOD to score 20 FP against DEFGOOD, and QBBAD to score 20 FP against DEFBAD. And there's no advantage in having the best QB against the best D over the worst QB against the worst D.

The problem, of course, is that we don't know defenses as well. You might use the word unpredictable (that's what the smart staffers say), or you might say we're just uneducated about them (that's what I say). We know lots about Culpepper and Manning, and we can say what their true value is. If a defense has two good weeks, we think they're great all of a sudden. When they flop the next week, we say unpredictable.

Anyway, let's go back to the projections and QBGOOD and QBBAD. Now we won't stipulate anything, but just go on what we know. The smart staffers think that instead of dropping QBGOOD by 5 FPs when he plays DEFGOOD, they'd drop him maybe 0.5, and bump QBBAD 0.5. There should be a change, for sure, but the factor of ten illustrates that we're just not very confident about our defensive projections. We're going to discount the bumps up or down that we know should apply, based on the little information we have. In theory, this is both accurate and wise.

Figuring out what that discount factor is, however, is not easy. I think it's a bit higher than 10%, but like I said, there aren't many opinions I respect more than those of the FBG guys who were all about the 10%.
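The adjustment being debated reduces to one line of arithmetic. Here's a sketch using the numbers from the post (the `projected` function, the `LEAGUE_AVG` constant, and the 0.1 discount factor are illustrative, not anyone's official formula):

```python
# Sketch of the matchup adjustment debated above: the "full" bump moves a QB
# by the defense's deviation from the league average (20 FP/G), while the
# staffers' discounted version applies only a fraction of that bump (their
# argument: ~10%, i.e. a factor of 0.1). Numbers come from the hypothetical
# in the post; the function itself is an illustrative assumption.

LEAGUE_AVG = 20.0  # FP/G an average QB scores vs. an average defense

def projected(qb_avg, def_allowed, discount=1.0):
    """Project a QB's FP vs. a given defense, discounting the matchup effect."""
    bump = def_allowed - LEAGUE_AVG       # positive if the defense is generous
    return qb_avg + discount * bump

# Full confidence in the numbers: best QB vs. best D == worst QB vs. worst D.
print(projected(25, 15))                 # QBGOOD vs. DEFGOOD -> 20.0
print(projected(15, 25))                 # QBBAD  vs. DEFBAD  -> 20.0

# Discounted by the staffers' factor of 0.1: a 5-point bump becomes 0.5.
print(projected(25, 15, discount=0.1))   # -> 24.5
print(projected(15, 25, discount=0.1))   # -> 15.5
```

The whole argument in this post is about what `discount` should be: 1.0 if defensive projections were as trustworthy as QB projections, ~0.1 per the staffers, somewhat higher per the author.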

 
Not to bust on your TLD, but this is a good example. The Jets ranked 2nd in FP allowed last year to QBs. You would not have wanted your QB to play the Jets last year. I think, in general, disinterest more than anything else, leads to defenses being unpredictable.
I wasn't necessarily talking about QBs. I meant any two players that are even. I just named the worst-scoring DEFs from last year. And even if it was a QB I was looking at, I would still rather take Player B, who has to play the Jets once, than Player A, who has 11 tough games (if you count the "tough division" games). But like I said, I rarely use it and it's a last resort. Like in the World Cup, for example: in advancing out of group play, the last tiebreaker is picking out of a hat. It's just there to break the tie.
 
If you were to run the correlation coefficient between SOS and QB performance, you'd get a CC of -1. That means a perfectly negative relationship -- as SOS goes up (i.e., gets easier), QB performance goes down (i.e., gets worse). Of course, this is the exact opposite of what actually happened. That's why you don't compare these two variables.
Chase, all this demonstrates is that it's a good theory. You set it up well, lobbed it in, and knocked it clear out of the ballpark.

All I'm positing is that reality doesn't work in the clearly defined space you've articulated here. And, while your hypothetical does dramatic things based on SOS, the data give you nothing better than a coin flip. I know you've said that you're not trying to prop up SOS as anything with a strong correlation to performance. But, if there isn't even a small correlation in the real world, what use is it? You seem to think it still has value. I fail to see where that value lies, other than in the world of QBA, QBE, TMA, and TME, which doesn't exist in reality.

At the edges of the distribution, and in individual cases, no doubt: SOS matters. (I prefer to call it SOD--Strength Of the Day--to emphasize that on a case-by-case basis, there will be matchups with higher predictive value than the entire distribution.) But, what you guys do repeatedly each year is summarize each position's season in terms of its strength; good v. bad; hard v. easy. You attach a number to summarize that season's difficulty. And, it's cited and borrowed in numerous forms, whether in articles or posts on messageboards. What I'm suggesting is that the variability is so minimal when you start pooling the data like this that you get nothing meaningful out of it. Not out of that summary score. It's a useless digit that suggests something is there when it's not.

Let me take two steps back: In reality, the strength of a season schedule has ~11% effect on performance (just ran a few quick analyses), which is very good and very significant. But...this is when you analyze it with 20/20 hindsight. This is when you compare, say, a QB's 2005 performance with the quality of the 2005 defenses he faced and their performance.

That's not, however, how preseason SOS is being used or advertised. Instead, what SOS advocates want to do is take 2004 data and extrapolate that information into 2005 predictions--and that is simply not a valid means to assess how strong or how weak a season looks for a QB. Again, individual situations at the extremes will likely hold strong predictive power. I trust that, based on last year alone, Peyton Manning will outperform Alex Smith, just as I trust that Pittsburgh's defense will outperform the 49ers' defense. I need look no further than the 2004 data to make this reasonable assumption. But, there are only a handful of examples with which I can do this. And, those examples add very little when looking across a 16-game season among 32 teams.

So, I return to my original point, which is that SOS data can be useful in small, circumscribed situations. But they are much--MUCH--more limited in what they can tell us than most people think.

 
Instead, what SOS advocates want to do is take 2004 data and extrapolate that information into 2005 predictions--and that is simply not a valid means to assess how strong or how weak a season looks for a QB.
excellent point . . .
 
Agreed, Chase. If I'm looking at two players who appear dead even to me in terms of talent, surrounding talent, and injury risk, I'll look to SOS to break the tie. I'm not going to put much stock into SOS, but in the rare (maybe twice in a draft) event of a dead-even tie, I'll use SOS.

And, all I'm trying to say--what's characterized in the data--is that I'll flip a coin to break that tie, and I'll have equal success at picking the better teams as you will using SOS. I think you, along with most folks, believe this is a good strategy. It sounds good in theory to me, too. But, the theory doesn't map on to reality. Not in the way we would like it to.
 
I'm a little bit tied up at the moment -- possibly coming through with some really good stuff for the QBBC article -- so you probably shouldn't wait around for a response here cobalt. I'll post before I go to bed though.

 
I'm a little bit tied up at the moment -- possibly coming through with some really good stuff for the QBBC article -- so you probably shouldn't wait around for a response here cobalt. I'll post before I go to bed though.
Yeah, I'm calling it a night. But, you've got me on a crusade now, Chase.

STOP SOS! STOP SOS! THE SOS LIES! STOP SOS! :D
 
Well it's not a good predictor if you're expecting it to say "tough predicted fantasy schedule = bad season for QB X." I think it's a decent predictor -- and absolutely better than a roll of the die -- if you're expecting it to say "tough predicted fantasy schedule = tough schedule for QB X." That means QBX deserves some sort of downgrade.
Agreed, Chase. If I'm looking at two players who appear dead even to me in terms of talent, surrounding talent, and injury risk, I'll look to SOS to break the tie. I'm not going to put much stock into SOS, but in the rare (maybe twice in a draft) event of a dead-even tie, I'll use SOS.

And, all I'm trying to say--what's characterized in the data--is that I'll flip a coin to break that tie, and I'll have equal success at picking the better teams as you will using SOS. I think you, along with most folks, believe this is a good strategy. It sounds good in theory to me, too. But, the theory doesn't map on to reality. Not in the way we would like it to.
Chase (and others), I want to believe in SoS, but at most I use it as a tiebreaker, as LastDispatch said. All the studies in the world -- although certainly valid mathematically -- don't matter as much to me as my own experiences playing FF when it comes to SoS.

So much of the real game is about individual matchups that even if one defense is stronger against the pass, certain receivers (and QBs, etc.) may excel against it due to the matchup of the offensive style/system vs. the defensive style/system.

I look no further than my Steelers for empirical confirmation. For years, I have started my FF players against the Steelers, especially if they are the WR1 on their team, because the Steelers historically tend to double-team less and worry more about what they are doing than what the offense is doing. At least that's how I've always seen it, and it's worked for me more often than not. Even when the Steelers had a very good pass defense, Jimmy Smith had several monster games against them based on this strategic matchup. Think of Marvin Harrison's 80-yard catch to open up the MNF game last season as exhibit #2.
 
One thing I haven't done yet -- mostly because I just thought of it recently, and it sounds nerdy even for me -- is to use the two stats above (FPA and QBRA) and normalize them based on schedules from the previous season. That would likely add a not insignificant amount of reliability to the data. It's on my to-ask-Drinen list.
I ran through a lot of complex work for not terribly fascinating results. The correlation coefficient between raw FPs allowed by each defense to QBs and the adjusted FPs allowed to QBs was 0.94. That's pretty high. The biggest movers were Detroit (9th-best defense to 18th-best) and Green Bay (13th to 21st); on the other side of the ledger, San Diego (25th to 19th), Oakland (16th to 11th) and Dallas (10th to 5th) were the big movers.
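For anyone curious about the mechanics, here's a rough sketch of that kind of schedule adjustment (all the numbers are invented for illustration; the real exercise used full season box scores):

```python
# Sketch of a schedule-adjusted "fantasy points allowed" (FPA) metric.
# Idea: a defense's raw FPA is inflated or deflated by the QBs it happened
# to face, so re-center each game by the opposing QB's season average.
# The league-average FP/G and all game data below are hypothetical.

LEAGUE_AVG = 16.3  # league-average QB FP/G

def adjusted_fpa(games):
    """games: list of (fp_allowed, opposing_qb_season_avg) pairs."""
    # Subtract out how far each opposing QB sits above/below average.
    return sum(fp - (qb_avg - LEAGUE_AVG) for fp, qb_avg in games) / len(games)

# A defense that allowed 18 FP/G, but only against above-average QBs,
# grades out better once the schedule is taken into account.
games = [(18.0, 19.0), (18.0, 20.5), (18.0, 17.8)]
print(round(adjusted_fpa(games), 1))  # → 15.2
```

Run all 32 defenses through something like this and compare the raw list to the adjusted list -- that's the 0.94-correlation exercise described above.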
 
I guess our definitions of usefulness just vary a bit. If I've projected Peyton Manning at 320 points, and find out he now has a really easy schedule, maybe I'll bump it to 335 points. That's pretty valuable to me.

 
I guess our definitions of usefulness just vary a bit. If I've projected Peyton Manning at 320 points, and find out he now has a really easy schedule, maybe I'll bump it to 335 points. That's pretty valuable to me.
And I'm challenging your ability to say, prospectively, that he has an "easy" schedule. Based on what? Can you put your ability to a test? How did you choose 15 points? Is that a guesstimate, or is it based on something that tells you a roughly 5% increase is warranted? The problem here is that we have two competing systems: our gut and the data. Our gut tells us we should pay attention to schedule strength. The data suggest that, for the majority of teams, predicting schedule strength is unreliable and has a small effect on outcome. Footballguys has advanced a number of theories and tools to remove us from the "gut" call and go with the data (look at VBD as the most obvious example). Yet when I hear you talk about bumping Peyton Manning up 15 points based on a really easy schedule, it raises the questions "why 15 points?" and "how did you determine it was easy?"

If there is some effect of schedule (and there is, if we look retrospectively) that can be predicted, then there has to be some way to characterize it through the data. I haven't seen that pulled off yet. Someone said that Drinen did this a while ago; I have yet to see the link. But if SOS is defensible, there should be a way to articulate that defense through some data. That's all I'm saying.

 
SOS should be viewed more as a guideline for tweaking your expectations slightly, not being the primary determinant in who you select among similar-looking players.
And, while I would probably still do this myself...I think it's ultimately fool's gold.
Now you are arguing too far the other direction. Just because something might NOT ALWAYS be accurate doesn't mean it is NEVER accurate. You make decisions based on all of the information available to you. The SOS might not be the most accurate tool for drafting, but that doesn't mean that most of the time it will not give you the best player choice.
 
I have read most of the posts in this thread and I fall in between, but would ultimately defend SOS. I wrote a freelance article that I hope will post soon on the FBG site concerning the utilization of SOS. When it posts I will provide the link.

In my article, I basically talk about how preseason SOS can be very inaccurate, and I cite instances where defenses that were considered preseason to be top five at STOPPING the rush/pass actually ended the year in the top five at ALLOWING the rush/pass. I made this point because in the FBG magazine Clayton Gray talks about utilizing SOS during the draft to get off to a quick start in your league. (As an aside: since Clayton's article, where he talks about LT2 having one of the five easiest RB schedules in the first five games, the SOS has shifted to show LT2 with one of the hardest schedules over the first five games.) But this does not mean that preseason SOS is COMPLETELY unreliable. I have not quantified how accurate it is, but let's say for argument's sake it is 51% accurate. Do I want to bet my life on it? No! But do I want to use it as a tool to help me choose between two seemingly equal players? Yes! Just because something isn't completely accurate does not mean it is not useful. You go with the odds and what you know.

I also talk about how SOS is most useful (to me) around mid-season. By this time the SOS has stabilized (and rightfully so, as just from an observational standpoint you can tell which defenses are good and which are bad). At this point, you can start targeting guys to acquire who may have had tough schedules so far and not put up their best numbers, and guys who have put up good numbers but might be getting ready to tank because of bad upcoming matchups. The gold here is that around midseason you can begin targeting players for the playoffs by looking at SOS and acquiring them for your team before others have figured this out.

Bottom line: SOS is least accurate preseason, and by midseason it is very accurate. But it is useful in BOTH cases because it is based on the information you have at the time. Some information is better than no information. Nobody says to use SOS alone. It is merely another tool in the tool belt.

 
Let me take two steps back: In reality, the strength of a season schedule has ~11% effect on performance (just ran a few quick analyses), which is very good and very significant. But...this is when you analyze it with 20/20 hindsight.
This makes a lot of sense to me. But the question is: how much of that 11% is predictable in advance?

Correct me if I'm wrong, but I think you had an estimate of 2% earlier, which would imply that the effect is between one-fifth and one-sixth predictable. I would expect that a very good analyst might be able to predict about half of the actual effect, or about 5%.

Let's see where this 2% to 5% range gets us.

FBG projects WR#18 (aka an average WR2) to score 145.6 fantasy points. A range of +/- 2% would adjust this total by 2.9 points. Based on these projections, 148.5 points would be enough for WR#17, and 142.7 points would drop him to WR#19.

A range of +/- 5% would adjust this total by 7.3 points. A total of 152.9 points would be WR#15, and a total of 138.3 points would be WR#21.

So what we're saying is that predicted SOS could move a player to the top or bottom of a group of three to seven players.

In my opinion, this nicely simulates the kind of decisions that are made at a single draft pick. We are all repeatedly faced with decisions among a handful of players in the same tier or bucket. What this thought experiment tells us is that SOS is good enough to affect your rankings of a small number (at 2%) or a larger number (at 5%) of closely-ranked players. It doesn't take any more than a 2% effect to make a difference in how you draft.
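The arithmetic in that thought experiment, spelled out (145.6 points is the quoted FBG projection for WR#18; nothing else is assumed):

```python
# How far does a +/- 2% or +/- 5% SOS adjustment move a 145.6-point
# projection? (145.6 = the quoted FBG projection for WR#18.)

baseline = 145.6

for pct in (0.02, 0.05):
    swing = baseline * pct
    print(f"+/-{pct:.0%}: {swing:.1f} pts, "
          f"range {baseline - swing:.1f} to {baseline + swing:.1f}")
# → +/-2%: 2.9 pts, range 142.7 to 148.5
# → +/-5%: 7.3 pts, range 138.3 to 152.9
```

Those are exactly the 148.5/142.7 and 152.9/138.3 boundaries above -- the swing that moves a WR#18 anywhere between roughly WR#15 and WR#21.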

 
SOS should be viewed more as a guideline for tweaking your expectations slightly, not being the primary determinant in who you select among similar-looking players.
And, while I would probably still do this myself...I think it's ultimately fool's gold.
Now you are arguing too far the other direction. Just because something might NOT ALWAYS be accurate doesn't mean it is NEVER accurate. You make decisions based on all of the information available to you. The SOS might not be the most accurate tool for drafting, but that doesn't mean that most of the time it will not give you the best player choice.
But that's exactly what I'm saying, and exactly what the data seem to show. A coin flip is accurate some of the time, just as SOS is. But I believe that's by chance -- not because we can derive anything substantial or meaningful from it. It will not give you the best player choice. You can use your SOS, and I can flip a coin, and statistically I will have the same number of successes and failures as you. My issue is that, while it sounds good, I don't see any data to suggest that using last year as a gauge (which is the predominant feature of SOS) can be carried into a future season to predict... well... anything. Nobody has provided data to validate it, and the analyses I've run from 2002-2005 suggest that it is unreliable. And moreover, even if it were reliable, the effect is negligible (on the order of 1-2%).

 
I'll make an example using my rearview SOS article. The league average QB's schedule allowed -- and thus the league average QB scored -- 16.3 FP/G. (This is computed by taking every FP scored by all QBs, dividing by 32 teams, and then dividing by 16 games.) Let's take two guys at the extremes.

Brett Favre played 15.6 games and averaged 16.1 FP/G. His schedule was hard, and we would have expected Joe Average to score 15.1 FP/G based on such a schedule. (This is stuff straight from the article). So he's got a value added of 1.0 FP/G to his schedule. Assuming he had a league average schedule, then, he would have scored an extra 1.2 FP/G, since his schedule was 1.2 FP/G tougher than average. This boosts Favre up from 16.1 FP/G to 17.3 FP/G.

An additional 1.2 FP/G is significant, although Favre is one of the guys at the extremes. He would have moved from 24th to 16th (those rankings are in FP/G).

That 1.2 FP/G is worth 19 FPs for the season. If you've got a very hard schedule (or a very easy schedule) that could be worth 25 FPs for the season. Now you've got to discount that a bit by the uncertainty of predicting such a difficult schedule. Some might give just a negligible bump up; others might give the full 25.
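For anyone following along, the Favre example boils down to a few lines of arithmetic (every figure below comes straight from the post):

```python
# The Favre rearview-SOS example as arithmetic.

league_avg   = 16.3   # FP/G the average QB scored against an average schedule
expected_avg = 15.1   # FP/G the average QB would have scored vs Favre's slate
favre_actual = 16.1   # Favre's actual FP/G
games        = 15.6   # (weighted) games played

schedule_penalty = league_avg - expected_avg    # 1.2 FP/G tougher than average
adjusted = favre_actual + schedule_penalty      # what Favre "should" have done
season_value = schedule_penalty * games         # full-season impact

print(round(adjusted, 1), round(season_value))  # → 17.3 19
```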

 
Let me take two steps back: In reality, the strength of a season schedule has ~11% effect on performance (just ran a few quick analyses), which is very good and very significant. But...this is when you analyze it with 20/20 hindsight.
This makes a lot of sense to me. But the question is: how much of that 11% is predictable in advance?

Correct me if I'm wrong, but I think you had an estimate of 2% earlier, which would imply that the effect is between one-fifth and one-sixth predictable. I would expect that a very good analyst might be able to predict about half of the actual effect, or about 5%.

Let's see where this 2% to 5% range gets us.

FBG projects WR#18 (aka an average WR2) to score 145.6 fantasy points. A range of +/- 2% would adjust this total by 2.9 points. Based on these projections, 148.5 points would be enough for WR#17, and 142.7 points would drop him to WR#19.

A range of +/- 5% would adjust this total by 7.3 points. A total of 152.9 points would be WR#15, and a total of 138.3 points would be WR#21.

So what we're saying is that predicted SOS could move a player to the top or bottom of a group of three to seven players.

In my opinion, this nicely simulates the kind of decisions that are made at a single draft pick. We are all repeatedly faced with decisions among a handful of players in the same tier or bucket. What this thought experiment tells us is that SOS is good enough to affect your rankings of a small number (at 2%) or a larger number (at 5%) of closely-ranked players. It doesn't take any more than a 2% effect to make a difference in how you draft.
Well put. I think a 2% effect is the high end (other analyses I've run have shown effects of 1.1% and 1.4%). But that's fine. If a 2-3 point difference is all you're looking for, then SOS will work for you. I still question its reliability, and I'll have to run some more tests. But assuming it is reliable, if you're looking for just that point or two, then it's there to use.

I think it's important to keep its limitations in mind (and your note is very good at describing this). I'd like some other folks to chime in with their own ways of looking at it, though. It has great appeal to me, and I don't want to abandon it. I just think that our traditional ways of measuring SOS don't capitalize on what I think they should, which ultimately is about predicting behavior.

 
I'll make an example using my rearview SOS article. The league average QB's schedule allowed -- and thus the league average QB scored -- 16.3 FP/G. (This is computed by taking every FP scored by all QBs, dividing by 32 teams, and then dividing by 16 games.) Let's take two guys at the extremes.

Brett Favre played 15.6 games and averaged 16.1 FP/G. His schedule was hard, and we would have expected Joe Average to score 15.1 FP/G based on such a schedule. (This is stuff straight from the article). So he's got a value added of 1.0 FP/G to his schedule. Assuming he had a league average schedule, then, he would have scored an extra 1.2 FP/G, since his schedule was 1.2 FP/G tougher than average. This boosts Favre up from 16.1 FP/G to 17.3 FP/G.

An additional 1.2 FP/G is significant, although Favre is one of the guys at the extremes. He would have moved from 24th to 16th (those rankings are in FP/G).

That 1.2 FP/G is worth 19 FPs for the season. If you've got a very hard schedule (or a very easy schedule) that could be worth 25 FPs for the season. Now you've got to discount that a bit by the uncertainty of predicting such a difficult schedule. Some might give just a negligible bump up; others might give the full 25.
Chase, I'm not disputing that there is an effect if you look at it retrospectively. I think I've said before that your schedule makes about an 11% difference across the board. My question is, though: could you have PREDICTED that his schedule was going to be hard? Is this something you can do prospectively? And even were you able to do so in this one instance, could you have predicted successfully in more than 50% of the cases (17 of 32) who would perform above or below expectations based on their schedules?

I think ultimately we're still talking about two different things. SOS matters! I agree with you. No doubt. It accounts for about 11% of the variance. But we have to distinguish between preseason SOS and postseason SOS. They are very different. What you seem to be arguing for is supported by retrospective analyses, not prospective ones. And post hoc analyses are great. But what I am challenging -- and what I would ultimately like improvements on -- is the preseason SOS data, because I don't think it means much across teams and across schedules.
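One way to run the prospective test being asked for: correlate each defense's fantasy points allowed in year N-1 with its points allowed in year N. If that year-over-year correlation is near zero, a preseason SOS built from last year's numbers can't predict much. A sketch with made-up numbers (the function is just a plain Pearson correlation):

```python
# Does a defense's FP/G allowed in year N-1 predict its FP/G allowed in
# year N? A near-zero correlation would mean last-year-based preseason
# SOS has little predictive value. The five-team dataset below is
# invented purely to show the method.

from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical FP/G allowed to QBs by five defenses in back-to-back years.
fpa_prior = [14.1, 15.8, 16.3, 17.5, 19.0]
fpa_next  = [16.0, 15.2, 17.1, 16.4, 18.2]
print(round(pearson(fpa_prior, fpa_next), 2))
```

Run this across all 32 defenses over several season pairs and you have the reliability number the whole argument hinges on.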

 
SOS should be viewed more as a guideline for tweaking your expectations slightly, not being the primary determinant in who you select among similar-looking players.
And, while I would probably still do this myself...I think it's ultimately fool's gold.
Now you are arguing too far the other direction. Just because something might NOT ALWAYS be accurate doesn't mean it is NEVER accurate. You make decisions based on all of the information available to you. The SOS might not be the most accurate tool for drafting, but that doesn't mean that most of the time it will not give you the best player choice.
But that's exactly what I'm saying, and exactly what the data seem to show. A coin flip is accurate some of the time, just as SOS is. But I believe that's by chance -- not because we can derive anything substantial or meaningful from it. It will not give you the best player choice. You can use your SOS, and I can flip a coin, and statistically I will have the same number of successes and failures as you. My issue is that, while it sounds good, I don't see any data to suggest that using last year as a gauge (which is the predominant feature of SOS) can be carried into a future season to predict... well... anything. Nobody has provided data to validate it, and the analyses I've run from 2002-2005 suggest that it is unreliable. And moreover, even if it were reliable, the effect is negligible (on the order of 1-2%).
Yes, but I stated "most of the time," so I do not believe it is a toss-up situation; I really feel the percentages are much better than a toss-up for preseason SOS. Again, I DO believe that SOS is at its weakest preseason -- it has to be by its very nature (but it is still beneficial, even if you only believe it can help you squeeze out that last drop of an upper hand). The question I have is: "Is SOS already figured into player projections to some extent, thus potentially double-penalizing a player if you decide between two players based on SOS?" In other words, if I see Reggie Wayne and Roy Williams both projected for 155 points and Williams has the weaker schedule, then I might opt for Williams over Wayne. But has Wayne's schedule already been taken into account, placing him at the low end of his projection range, whereas Williams might be at the top of his range because his schedule was already figured into his projections? So should I have really opted for Wayne, since he might have more upside? This is what I am not sure about, and the only caveat to choosing between two players with SOS as a last tie breaker. I have to believe that a player's projections already take his schedule into account to some degree.

 
SOS should be viewed more as a guideline for tweaking your expectations slightly, not being the primary determinant in who you select among similar-looking players.
And, while I would probably still do this myself...I think it's ultimately fool's gold.
Now you are arguing too far the other direction. Just because something might NOT ALWAYS be accurate doesn't mean it is NEVER accurate. You make decisions based on all of the information available to you. The SOS might not be the most accurate tool for drafting, but that doesn't mean that most of the time it will not give you the best player choice.
But that's exactly what I'm saying, and exactly what the data seem to show. A coin flip is accurate some of the time, just as SOS is. But I believe that's by chance -- not because we can derive anything substantial or meaningful from it. It will not give you the best player choice. You can use your SOS, and I can flip a coin, and statistically I will have the same number of successes and failures as you. My issue is that, while it sounds good, I don't see any data to suggest that using last year as a gauge (which is the predominant feature of SOS) can be carried into a future season to predict... well... anything. Nobody has provided data to validate it, and the analyses I've run from 2002-2005 suggest that it is unreliable. And moreover, even if it were reliable, the effect is negligible (on the order of 1-2%).
Yes, but I stated "most of the time," so I do not believe it is a toss-up situation; I really feel the percentages are much better than a toss-up for preseason SOS. Again, I DO believe that SOS is at its weakest preseason -- it has to be by its very nature (but it is still beneficial, even if you only believe it can help you squeeze out that last drop of an upper hand).
Yeah, I'd like to feel that way too. The problem is the data don't support it. Is there any statistical way that you can think of that can help prop this up as something more than a feeling?
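Here's one statistical way to frame it: log every SOS-based tiebreak you make, treat each one as a Bernoulli trial, and ask how unlikely the observed hit rate would be if SOS were pure chance. A sketch with a hypothetical record:

```python
# One way to make "better than a coin flip" testable: compute the
# one-sided binomial p-value for an observed SOS-tiebreak hit rate
# under a fair-coin null. The 60-of-100 record below is made up.

from math import comb

def binom_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: SOS picked the better player 60 times out of 100 tiebreaks.
print(round(binom_p_at_least(60, 100), 3))
```

With that made-up 60-of-100 record the p-value lands under 0.05, so a hit rate like that over enough decisions really would separate SOS from a coin; the catch is that somebody has to log the decisions first.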
 
I don't know what else to say here. If you would honestly rather flip a coin to decide which player to take than consider the defenses they're playing, then that's up to you. It's your team.

 
I don't know what else to say here. If you would honestly rather flip a coin to decide which player to take than consider the defenses they're playing, then that's up to you. It's your team.
Flip response aside, I assume then that you have some support to demonstrate that SOS is a valid predictor.
 
I don't have the time or patience to run through all the numbers. But when it comes down to it, when players are deadlocked and my options are flipping a coin or considering their schedules, I'll take a look at their schedules.

 
