bobspruill
Breathe deeply
DISCLAIMERS:
This is not meant as criticism of FBG at all. I've used their projections for 4 years now with good success. The purpose here is to assess risks in depending upon FBG projections by determining where they are strong and where they are weak. This analysis only concerns 2005 projections, and as such the sample sizes are necessarily small. The comparisons made here are to the alternative method of using prior year's performance; this is not a competitive analysis of FBG to other sites' projections.
This is a companion piece to the RB analysis posted here. You might want to look at it before reading more.
CONCLUSIONS:
As one would expect given the increased volatility of receiver performance and the larger pool of players involved, these numbers proved more difficult to predict than RB numbers.
(1) For receivers ranked in the range 1-20, both FBG projections and prior year's performance were essentially useless predictors of 2005 rank.
(2) Receivers ranked in the range 21-60 by FBG represented both less downside risk and more upside potential than receivers ranked in the same range by prior year's performance.
(3) Receivers predicted by FBG to perform significantly better than the previous year often did so when their projected ranking fell outside the top 15, but did not when they were projected to rank in the top 15.
(4) As a practical matter, some method of tempering projected differences in production among the top 20 receivers should be found; within this group it at least appears that differences in predicted value do not translate into differences in actual value.
METHODS & DETAILED FINDINGS:
For the purposes of this analysis, WR's and TE's were considered together. The top 60 overall scorers in the two positions taken together were considered, of which 12 were TE's. Only fantasy scoring for weeks 1-16 of the NFL season was used. The scoring system was as follows:
1pt/5 yds rushing + 1pt/5 yds receiving + 1 PPR + 1pt/10 yds passing + 6 pts per TD passing or rushing or receiving - 3 pts per interception or fumble
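The scoring rule above can be sketched as a small function (the stat-line field names here are my own, chosen for illustration):

```python
def fantasy_points(rush_yds=0, rec_yds=0, receptions=0, pass_yds=0,
                   tds=0, turnovers=0):
    """Score one stat line under the league rules above."""
    return (rush_yds / 5.0        # 1 pt per 5 rushing yards
            + rec_yds / 5.0       # 1 pt per 5 receiving yards
            + receptions          # 1 point per reception (PPR)
            + pass_yds / 10.0     # 1 pt per 10 passing yards
            + tds * 6             # 6 pts per passing/rushing/receiving TD
            - turnovers * 3)      # -3 pts per interception or fumble

# e.g. a 100-yard, 6-catch, 1-TD receiving day:
print(fantasy_points(rec_yds=100, receptions=6, tds=1))  # -> 32.0
```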
As with the RB analysis, two basic comparisons were made. The first examined the relationship between the predicted and actual 2005 ranks.
Among those predicted by FBG to be in the top 60, the correlation coefficient between predicted rank and actual rank was 0.21. This is conventionally interpreted to mean that FBG projections accounted for 4% of the variance in the actual ranking of the players identified by FBG as falling in the top 60.
Among the top 60 scorers from 2004, the correlation coefficient between that rank and their 2005 rank was 0.30, which is usually interpreted to mean that last year's rank accounted for 9% of the variability in this year's rank.
It ought to be noted that this method of estimating correlation within a subsample focuses primarily, although not entirely, on the relative rank of those chosen. That is, these correlation coefficients are mostly useful for determining the degree to which top-60 receiver A being ranked above top-60 receiver B predicts receiver A's actual rank being higher than receiver B's actual rank.
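For concreteness, here is one way such a rank-to-rank correlation could be computed: a plain Pearson coefficient over the rank pairs in the subsample (a sketch of the general method, not necessarily the exact calculation used here):

```python
def rank_correlation(pred_ranks, actual_ranks):
    """Pearson correlation of predicted vs. actual ranks; applied to
    ranks, this acts like a Spearman-style measure over the subsample."""
    n = len(pred_ranks)
    mx = sum(pred_ranks) / n
    my = sum(actual_ranks) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(pred_ranks, actual_ranks))
    vx = sum((x - mx) ** 2 for x in pred_ranks)
    vy = sum((y - my) ** 2 for y in actual_ranks)
    return cov / (vx * vy) ** 0.5

# "Variance accounted for" is the square of the coefficient:
print(round(0.21 ** 2, 2), round(0.30 ** 2, 2))  # -> 0.04 0.09
```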
Some comparison with FBG's RB projections is probably warranted. Among the top 40 RB's, FBG projections had a correlation coefficient of 0.55 with actual rank; for the prior year's top 40 RB's, the correlation of 2004 rank with 2005 rank was 0.58.
If we restrict the receiver data sets to the top 40, we get some interesting--some might say disturbing--results. For prior year's performance in this data set, the correlation coefficient was 0.00. This means that, on average, relative ranking within the previous year's top 40 had essentially no predictive value for determining relative rank the following year. For FBG projections, the analogous coefficient was -0.18. This negative correlation suggests that, to some very small degree, being ranked near the bottom of the top 40 by FBG was actually preferable to being ranked near the top!
To illustrate these findings, I've pasted in links to graphical representations of predicted versus actual rank. As with the RB graphs, these have a red line to indicate the graph of a perfect ranking. Cases where the blue line (predicted rank) is above the "perfect" line are instances in which a receiver was overvalued; where the blue line is below "perfect," the receiver was undervalued.
FBG predicted rank versus actual rank
prior year's rank versus actual rank
What we see here is a year in which no method of predicting relative rank among the top 60 was particularly successful.
If we simply consider the direction of the errors, this becomes quite apparent. Among the top 10, FBG overestimated rank in 9 cases and nailed 1 exactly; among the previous year's top 10, 7 declined in rank, 2 rose, and 1 stayed the same. By that measure, previous year's performance would have been a better top-10 predictor than FBG rank this year, but by no means a good one.
Among the top 20 receivers, FBG overestimated the rank of 15 and underestimated only 4; prior year's performance would have overestimated 14 and underestimated only 4. Over this area of the rankings, the two performed essentially the same.
Over the full top 60, FBG overestimated the rank of 34 and underestimated the rank of 25; using prior year's performance, you would have overestimated 38 and underestimated only 19. Thus, on ranks 21-60, both the downside risk and the upside potential under FBG projections were preferable to using prior year's rank. On ranks 1-20, however, neither method of prediction functioned well at all.
Moss and Owens have probably gotten the lion's share of the attention as underperformers, but in reality this was a bad year for the entire top 20. This is more or less what would be expected among receivers, where volatility is high. What we're seeing is, in some sense at least, regression toward the mean. Although this might have seemed an unusually bad year for receivers at the top, it seems reasonable to expect this feature of the top 20 receivers overall to persist from year to year.
With this in mind, it becomes important to investigate the consequential points differences associated with these ranking errors. Given that you are virtually assured not to be getting WR10 when you draft the player at 10 on your list of receivers, it's important to know how to minimize downside risk / maximize upside potential, if possible.
As with the RB analysis, consequential points difference is measured in the following way: your projection shows that, for example, Anquan Boldin is WR17; in actual fact, TJ Houshmandzadeh is WR17. Boldin scored 400 points, while TJ scored 313. Thus the consequential points difference in picking Boldin instead of TJ (the receiver you thought you were getting) is +87. This number is scaled by the baseline production of WR60 (Travis Taylor: 166) to give a percentage points difference of +52%.
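That calculation can be sketched as follows (the numbers come from the example above; the function shape and names are my own):

```python
def consequential_diffs(actual_points, predicted_order, baseline):
    """For each projected slot, compare the drafted player's actual points
    with the points of the player who truly finished at that slot.
    Positive = better than the slot you thought you were filling."""
    finish = sorted(actual_points.values(), reverse=True)
    out = {}
    for slot, player in enumerate(predicted_order):
        diff = actual_points[player] - finish[slot]
        out[player] = (diff, 100.0 * diff / baseline)
    return out

# Toy illustration of the Boldin example, with the WR60 baseline of 166.
# Boldin projected at slot 2 here; the true slot-2 finisher scored 313.
pts = {"Houshmandzadeh": 313, "Boldin": 400, "Taylor": 166}
d = consequential_diffs(pts, ["Houshmandzadeh", "Boldin", "Taylor"], 166)
print(d["Boldin"])  # +87 points, about +52% of the WR60 baseline
```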
Here, using this method, are graphical representations of the consequential points differences of the top 60 receivers under each projection.
FBG consequential difference in ranks
Prior year's consequential difference in ranks
In the case of the FBG projections, the RMS points difference over the whole top 60 was 65%. For prior year's rank, it was 71%--a wash, basically.
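The RMS figure is simply the root-mean-square of those percentage differences; a minimal sketch:

```python
def rms(pct_diffs):
    """Root-mean-square of percentage consequential differences: a
    sign-blind summary of how large the ranking errors were overall."""
    return (sum(d * d for d in pct_diffs) / len(pct_diffs)) ** 0.5

print(rms([30.0, -40.0]))  # errors of +30% and -40% -> RMS of ~35.4%
```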
Over the top 10, interestingly, it was 88% for FBG and 110% for prior year. Nearly all of these errors were overestimations. In short, there seems to have been slightly less downside risk in the FBG top 10 than there was in the prior year's.
Over the top 20, the RMS consequential points difference was 94% for FBG and 91% for the prior year. This seems rather surprising, given that only 13 of these are the same players, and that they are obviously not all ranked the same in both lists. This suggests that a way might exist of using both sets of rankings to improve upon either, although how is always the rub.
In cases where FBG predicted a top-20 finish for a receiver who had not been top-20 the year before, FBG was correct in 3 cases (projections: Boldin #17, Fitz #18, Steve Smith #20) and incorrect in 4 (R. Moss #1, Ward #10, Burleson #13, Roy Williams #16). This reproduces a pattern seen in the RB rankings: FBG predictions of a player having a substantially better year than the one before tended to be good outside the top 15 but not in the 1-15 range.
To move beyond averages a bit, the relative size and frequency of points differences seemed a good thing to investigate. Consequential points differences were subdivided into four categories: negative and beyond the RMS difference, negative but within the RMS difference, positive and within the RMS difference, and positive and greater than the RMS difference. These categories correspond to judgments of whether a player performed considerably worse, marginally worse, marginally better, or considerably better than expected.
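Those four buckets can be sketched as follows, with thresholds at plus or minus the RMS of the set, as described above (function and label names are my own):

```python
def bucket_counts(pct_diffs):
    """Classify each consequential difference relative to the set's RMS:
    negative = worse than projected; beyond +/-RMS = 'considerably' so."""
    rms = (sum(d * d for d in pct_diffs) / len(pct_diffs)) ** 0.5
    counts = {"considerably worse": 0, "somewhat worse": 0,
              "somewhat better": 0, "considerably better": 0}
    for d in pct_diffs:
        if d < -rms:
            counts["considerably worse"] += 1
        elif d < 0:
            counts["somewhat worse"] += 1
        elif d <= rms:
            counts["somewhat better"] += 1
        else:
            counts["considerably better"] += 1
    return counts
```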
Over the full top 60, FBG rankings performed significantly better by this measure than prior year's performance. For the FBG rankings, 18% did considerably worse than projected, 38% somewhat worse, 33% somewhat better, and 10% considerably better. The equivalent numbers for prior year's rank were 28% considerably worse, 33% somewhat worse, 33% somewhat better, and only 5% considerably better. As with RB projections, prior year's performance seems better used as a way of tempering optimism than as a source of projections in itself.
Over the all-important top 20, the comparison yields conflicting results. Although on average the FBG projected top 20 represented less downside risk, 15 nevertheless performed worse than expected (9 considerably, 6 somewhat); for prior year's rank, it was 14 (10 considerably, 4 somewhat). On the other hand, 6 of last year's top 20 performed somewhat better, whereas only 3 of FBG's projected top 20 did. Just to top the whole confusing sundae off with a cherry, FBG offered more extreme upside as well: 2 of the FBG top 20 performed considerably better than expected, whereas none of last year's top 20 did.
It seems apparent that those who are in the business of predicting receiver performance for the following year are in a difficult line of work. Given that fact and the problems of relative scarcity, it hardly seems surprising that so many fantasy players draft RB's in the first two rounds. In virtually any sense you choose to consider it, the risk/reward balance is more favorable for early-round RB's than for early-round WR's....
But this isn't a thread about draft strategy per se. Obviously, though, it has implications in that direction.