Fantasy Football - Footballguys Forums

Quality of FBG 2005 projections

bobspruill

DISCLAIMERS:

This is not meant as criticism of FBG at all. I've used their projections for 4 years now with good success. The purpose here is to assess risks in depending upon FBG projections by determining where they are strong and where they are weak. This analysis only concerns 2005 projections, and as such the sample sizes are necessarily small. The comparisons made here are to the alternative method of using prior year's performance; this is not a competitive analysis of FBG to other sites' projections.

This is a companion piece to the RB analysis posted here. You might want to look at it before reading more.

CONCLUSIONS:

As one would expect given the increased volatility of receiver performance and the larger pool of players involved, these numbers proved more difficult to predict than RB numbers.

(1) For receivers ranked in the range 1-20, both FBG projections and prior year's performance were essentially useless predictors of 2005 rank.

(2) Receivers ranked in the range 21-60 by FBG represented both less downside risk and more upside potential than receivers ranked in the same range by prior year's performance.

(3) Receivers predicted by FBG to perform significantly better than the previous year often did so when their projected rank was below 15 (i.e., 16-20), but did not when they were projected to rank in the top 15.

(4) As a practical matter, some method of tempering projected differences in production among the top 20 receivers should be found; within this group it at least appears that differences in predicted value do not translate into differences in actual value.

METHODS & DETAILED FINDINGS:

For the purposes of this analysis, WR's and TE's were considered together. The top 60 overall scorers in the two positions taken together were considered, of which 12 were TE's. Only fantasy scoring for weeks 1-16 of the NFL season was used. The scoring system was as follows:

1pt/5 yds rushing + 1pt/5 yds receiving + 1 PPR + 1pt/10 yds passing + 6 pts per TD passing or rushing or receiving - 3 pts per interception or fumble
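For concreteness, that scoring rule can be written as a small function (a sketch; the argument names are mine, not from any FBG data source):

```python
def fantasy_points(rush_yd=0, rec_yd=0, receptions=0, pass_yd=0,
                   pass_td=0, rush_td=0, rec_td=0, ints=0, fumbles=0):
    """1 pt/5 yds rushing or receiving, 1 PPR, 1 pt/10 yds passing,
    6 pts per TD of any kind, -3 per interception or fumble."""
    return (rush_yd / 5 + rec_yd / 5 + receptions
            + pass_yd / 10
            + 6 * (pass_td + rush_td + rec_td)
            - 3 * (ints + fumbles))

# Illustrative WR line: 1100 receiving yards, 80 catches, 9 TDs, 2 fumbles
print(fantasy_points(rec_yd=1100, receptions=80, rec_td=9, fumbles=2))
```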

As with the RB analysis, two basic comparisons were made. The first examined the relationship between the predicted and actual 2005 ranks.

Among those predicted by FBG to be in the top 60, the correlation coefficient between predicted rank and actual rank was 0.21. This is conventionally interpreted to mean that FBG projections accounted for 4% of the variance in the actual ranking of the players identified by FBG as falling in the top 60.

Among the top 60 scorers from 2004, the correlation coefficient between that rank and their 2005 rank was 0.30, which is usually interpreted to mean that last year's rank accounted for 9% of the variability in this year's rank.

It ought to be noted that this method of estimating correlation within a subsample focuses primarily, although not entirely, on the relative rank of those chosen. That is, these correlation coefficients are mostly useful for determining the degree to which top-60 receiver A being ranked above top-60 receiver B predicts receiver A's actual rank being higher than receiver B's actual rank.
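For anyone who wants to reproduce the correlation figures, the calculation is ordinary Pearson correlation applied to the two rank lists, with the square of the coefficient read as "variance explained." A minimal pure-Python sketch, using invented toy ranks rather than the actual 2005 data:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

predicted = [1, 2, 3, 4, 5, 6]   # projected rank (toy data)
actual    = [3, 1, 6, 2, 5, 4]   # actual end-of-season rank (toy data)
r = pearson(predicted, actual)
print(f"r = {r:.2f}, variance explained = {r * r:.0%}")
```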

Some comparison with FBG's RB projections is probably warranted. Among the top 40 RB's, FBG projections had a correlation coefficient of 0.55 with actual rank; for the prior year's top 40 RB's, the correlation of 2004 rank with 2005 rank was 0.58.

If we restrict the receiver data sets to the top 40, we get some interesting--some might say disturbing--results. For prior year's performance in this data set, the correlation coefficient was 0.00. This means that, on average, relative ranking within the previous year's top 40 had essentially no predictive value for determining relative rank the following year. For FBG projections, the analogous coefficient was -0.18. This negative correlation suggests that, to some very small degree, being ranked near the bottom of the top 40 by FBG was actually preferable to being ranked near the top!

To illustrate these findings, I've pasted in links to graphical representations of predicted versus actual rank. As with the RB graphs, these have a red line to indicate the graph of a perfect ranking. Cases where the blue line (predicted rank) is above the "perfect" line are instances in which a receiver was overvalued; where the blue line is below "perfect," the receiver was undervalued.

FBG predicted rank versus actual rank

prior year's rank versus actual rank

What we see here is a year in which no method of predicting relative rank among the top 60 was particularly successful.

If we simply consider the direction of the errors, this becomes quite apparent. Among the top 10, FBG overestimated rank in 9 cases and nailed 1 exactly; among the previous year's top 10, 7 declined in rank, 2 rose, and 1 stayed the same. In the top 10, then, previous year's performance would have been a better predictor than FBG rank this year, but by no means a good one.

Among the top 20 receivers, FBG overestimated the rank of 15 and underestimated only 4; prior year's performance would have overestimated 14 and underestimated only 4. Over this area of the rankings, the two performed essentially the same.

Over the full top 60, FBG overestimated the rank of 34 and underestimated the rank of 25; using prior year's performance, you would have overestimated 38 and underestimated only 19. Thus, on ranks 21-60, both the downside risk and the upside potential under FBG projections were preferable to using prior year's rank. On ranks 1-20, however, neither method of prediction functioned well at all.

Moss and Owens have probably gotten the lion's share of the attention as underperformers, but in reality this was a bad year for the entire top 20. This is more or less what would be expected among receivers, where volatility is high. What we're seeing is, in some sense at least, regression toward the mean. Although this might have seemed an unusually bad year for receivers at the top, it seems reasonable to expect this feature of the top 20 receivers overall to persist from year to year.

With this in mind, it becomes important to investigate the consequential points differences associated with these ranking errors. Given that you are virtually assured not to be getting WR10 when you draft the player at 10 on your list of receivers, it's important to know how to minimize downside risk / maximize upside potential, if possible.

As with the RB analysis, consequential points difference is measured in the following way: your projection shows that, for example, Anquan Boldin is WR17; in actual fact, TJ Houshmandzadeh is WR17. Boldin scored 400 points, while TJ scored 313. Thus the consequential points difference in picking Boldin instead of TJ (the receiver you thought you were getting) is +87. This number is scaled by the baseline production of WR60 (Travis Taylor: 166) to give a percentage points difference of +52%.
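That example can be written out as a tiny calculation, using only the numbers quoted above:

```python
def consequential_diff(your_pick_pts, actual_at_rank_pts, baseline_pts):
    """Raw and baseline-scaled difference between the player you drafted
    and the player who actually finished at that rank."""
    raw = your_pick_pts - actual_at_rank_pts
    return raw, raw / baseline_pts

# Boldin (400) drafted as WR17 vs. the actual WR17 (313), WR60 baseline 166
raw, pct = consequential_diff(400, 313, 166)
print(f"{raw:+d} points, {pct:+.0%} of the WR60 baseline")
```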

Here, using this method, are graphical representations of the consequential points differences of the top 60 receivers under each projection.

FBG consequential difference in ranks

Prior year's consequential difference in ranks

In the case of the FBG projections, the RMS points difference over the whole top 60 was 65%. For prior year's rank, it was 71%--a wash, basically.
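For reference, the RMS (root-mean-square) figure is computed in the usual way; a sketch with invented percentage differences, not the real top-60 data:

```python
def rms(values):
    """Root-mean-square of a sequence of numbers."""
    return (sum(v * v for v in values) / len(values)) ** 0.5

pct_diffs = [0.52, -0.80, 0.10, -1.10, 0.35]  # toy scaled differences
print(f"RMS difference = {rms(pct_diffs):.0%}")
```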

Over the top 10, interestingly, it was 88% for FBG and 110% for prior year. Nearly all of these errors were overestimations. In short, there seems to have been slightly less downside risk in the FBG top 10 than there was in the prior year's.

Over the top 20, the RMS consequential points difference was 94% for FBG and 91% for the prior year. This seems rather surprising, given that only 13 of the players are the same, and that they are obviously not all ranked the same in both lists. It suggests that a way might exist of using both sets of rankings to improve upon either, although how is always the rub.

In cases where FBG predicted a top-20 receiver that had not been top-20 the year before, FBG was correct in 3 cases (projections: Boldin #17, Fitzgerald #18, Steve Smith #20) and was incorrect in 4 (R. Moss #1, Ward #10, Burleson #13, Roy Williams #16). This reproduces a pattern seen in the RB rankings: FBG predictions of a player having a substantially better year than the one before tended to be good below 15 but not good in the 1-15 range.

To move beyond averages a bit, the relative size and frequency of points differences seemed a good thing to investigate. Consequential points differences were subdivided into four categories: negative and beyond the rms difference, negative but within the rms difference, positive and within the rms difference, positive and greater than the rms difference. These categories correspond to judgments of whether a player performed considerably worse, marginally worse, marginally better, or considerably better than expected.
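The four categories can be sketched as a simple threshold test (the cutoff value and the sample differences here are illustrative):

```python
def classify(diff, rms_threshold):
    """Bucket a scaled points difference relative to the RMS difference."""
    if diff < -rms_threshold:
        return "considerably worse"
    if diff < 0:
        return "marginally worse"
    if diff <= rms_threshold:
        return "marginally better"
    return "considerably better"

rms = 0.65  # e.g. the top-60 FBG figure quoted above
for d in (-0.9, -0.3, 0.4, 1.1):
    print(f"{d:+.2f}: {classify(d, rms)}")
```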

Over the full top 60, FBG rankings performed significantly better by this measure than prior year's performance. For the FBG rankings, 18% did considerably worse than projected, 38% somewhat worse, 33% somewhat better, and 10% considerably better. The equivalent numbers for prior year's rank were 28% considerably worse, 33% somewhat worse, 33% somewhat better, and only 5% considerably better. As with RB projections, prior year's performance seems better used as a way of tempering optimism than as a source of projections in itself.

Over the all-important top 20, a comparison yields conflicting results. Although on average the FBG projected top 20 represented less downside risk, nevertheless 15 performed worse than expected (9 considerably, 6 somewhat); in the case of prior year's rank, it was 14 (10 considerably, 4 somewhat). On the other hand, 6 of last year's top 20 performed somewhat better, whereas only 3 of FBG's projected top 20 did. Just to top the whole confusing sundae off with a nice cherry, of course FBG gave more extreme upside as well: 2 of the FBG top 20 performed considerably better than expected, whereas none of last year's top 20 did.

It seems apparent that those who are in the business of predicting receiver performance for the following year are in a difficult line of work. Given that fact and the problems of relative scarcity, it hardly seems surprising that so many fantasy players draft RB's in the first two rounds. In virtually any sense you choose to consider it, the risk/reward balance is more favorable for early-round RB's than for early-round WR's....

But this isn't a thread about draft strategy per se. Obviously, though, it has implications in that direction.

 
If 2005 were a basis for making decisions (which it may be), then the clear conclusion from this is not to spend high draft picks on WRs; rather, spend several mid-round picks and hope you strike gold. Instead (with only RBs to work with at the moment), invest the high picks in RBs (and hope you strike gold, but at least you have a better chance of it).

This follows my personal inclination - not because I've believed that picking consensus top 20 WRs to be top 20 is hard, but because picking top 60 WRs to be top 30 seems to be relatively easy (some of them will be available on waivers, whereas maybe one season-end top 30 RB will be on the waiver wire day one).

Can't wait for the QB analysis. And am wondering what the TE analysis would look like separate from WRs. It seems to me that expectations vs. actual with TE is perhaps even more skewed than with WRs (but one generally knows that the top 3 or 4 will be in the top 6 or 7; it's the guys out of nowhere that tempt one to pick sleepers).
Another conclusion you might draw from this result is that the two best draft strategies are to go heavily toward RB's or heavily toward WR's but avoid the middle ground. The case for drafting RB's heavily is pretty obvious.

If you draft, say, 3 WR's in the top 20 on your draft list, then you have a very good chance of landing 1 who's actually up there, and a fairly good chance of landing 2. Assuming you can get 1 later-round gem at RB (not inconceivable, given how FBG rankings seem to perform in the range 15-30), 2 top-flight WR's is a big advantage when your competition mostly has 0 or 1.

I'll do QB's before I think about doing TE's alone.

 

 
Thanks, Bob.

It seems to me that you have a large investment in WRs if you can get three Top 20 preseason. I don't see how one could afford that without denuding oneself of RBs. And buying 3 to get 1.5 is awfully expensive in and of itself. It seems far more likely that you could buy 3 to get 2.0 RBs, which is a better investment, plus - since if top RBs turn out to be worthless, it's almost always because of injury* - if you handcuff your top RBs, you tend to be able to replace some of that production.

* WR failure seems related not only to injury, but just not performing (I suppose you've introduced another player with a WR, namely the QB). And grabbing the WR's backup tends not to work as a handcuff (probably, again, because of the QB issue).

Anyway, not meaning to debate you, especially since you're staring at the numbers and I'm just looking at your tiered results. I very much welcome the information!
You're right that injury doesn't seem to be the most important variable with WR, although it's not insignificant. With a few notable exceptions (Boldin & Holt), the top WR's in 2005 played virtually every game for their teams. There was a bit of a handcuff effect (Bryant Johnson, Kevin Curtis), but the points for an injured WR tend to get spread around to several different players or go up in smoke completely, whereas for RB they tend to land predominantly on one player.

But, re: the draft strategy stuff, the "buy 3 WR to get 2" idea I flung out there was more a hypothesis than anything else. Even some seemingly very simple questions related to drafting turn out to be much less simple to study. Some of these are on my long list of things to look into before next season.

 
Aren't you forgetting one Randy Moss? Or don't you believe his injury impacted his performance, even though he "played" most of the games?
 
Nice work on this! In reading your analysis I am reminded of something that's not even addressed. It has to do with the players outside the top tier and with drafting. It's true that things change at the top. They also change at the bottom and in the middle. Because of that, you have to be prepared to draft players that will put you in a position to gain from the fluctuations in player performance.

Nothing has really changed in drafting strategies, except that I find people tend to draft based on last year's results. What this does do, however, is arm the "shark" type player with an understanding of what his competition is thinking.

 
I was talking about the WRs that ended up in the top 20, not those who were projected to be there in the first place. Sorry if I wasn't clear about that.
 
I wonder, for both of these threads (WR and RB): how do the "Real Time FBG Rankings" match up to actual performance? Since those rankings are an average of more opinions, do they end up getting you closer to actual results than the FBG projections, or worse?

 
Yes! Great question!
 
A good question indeed. Since my main interest at the moment is in evaluating the information on which you base draft decisions, rather than WDIS decisions, questions like this are much further down on my list.

There are at least two questions here, if I'm understanding your post correctly. (1) On a weekly basis, how well do FBG in-season projections correspond to actual performance that week? (2) Aggregated over a season, how well do FBG in-season projections match actual performance? I would suspect (though only suspect) that the answer to (2) is quite well. Of course, the answer to (1) is more useful to those of us who refer to that information in making decisions, although it could be a bit tricky to find a good way of measuring how useful it is overall.

 
I think he's talking about this as opposed to this
 
Nice work Bob. I'll take a closer look when I get some time. I really want to dig deep into the data. Thanks for the analysis. :thumbup:

 
Yes, that's exactly what I'm talking about... the PRE-SEASON "Real-time Consensus Rankings" that I like to use in conjunction with the pre-season projections that Joe and David do. I'm not talking about the weekly cheatsheets.

It would be great to see these pre-season consensus rankings compared with last year's stats against actual end-of-year rankings. I'd be forever in your debt if you could run that through your super-computer :)

thanks!

 
Ah, my apologies.

Since this will require doing some things from scratch, and to address a comment by Bass 'n Brew in the other thread, could someone please post what they consider to be a "standard" scoring system?

If you're feeling psychic you might just post the one the contributors had in mind when they were ranking these players in the first place.

 
I'd define standard scoring as:

1pt/25yds passing

4pt/passing TD

1pt/10yds rushing/receiving

6pt/rushing/receiving TD

I wouldn't say that PPR is standard (yet), or 6pt/passing TD, IMO.

my 2 cents.

 
Fantasy Points = (Pass Yards)/20 + (Rush Yd + Rec Yd)/10 + (Pass TDs)*4 + (Rush TDs + Rec TDs)*6 - (INTs)*1

Standard FBG Scoring
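That formula, transcribed directly as a function (the argument names are mine):

```python
def fbg_standard_points(pass_yd=0, rush_yd=0, rec_yd=0,
                        pass_td=0, rush_td=0, rec_td=0, ints=0):
    """Standard FBG scoring: 1pt/20yds passing, 1pt/10yds rushing or
    receiving, 4pt pass TD, 6pt rush/rec TD, -1 per INT."""
    return (pass_yd / 20 + (rush_yd + rec_yd) / 10
            + 4 * pass_td + 6 * (rush_td + rec_td)
            - ints)

# Illustrative QB line: 4000 pass yds, 28 pass TDs, 14 INTs
print(fbg_standard_points(pass_yd=4000, pass_td=28, ints=14))
```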

 
