
Quality of FBG 2005 projections

bobspruill

DISCLAIMERS:

This is not meant as criticism of FBG at all. I've used their projections for 4 years now with good success. The purpose here is to assess risks in depending upon FBG projections by determining where they are strong and where they are weak. This analysis only concerns 2005 projections, and as such the sample sizes are necessarily small. The comparisons made here are to the alternative method of using prior year's performance; this is not a competitive analysis of FBG against other sites' projections.

CONCLUSIONS:

(1) FBG, on balance, underestimates the consistency of the previous year's top 15 RB's.

(2) FBG, on balance, does a good job of assessing the value of new starters--those who are either changing teams or taking over a starting role--in the range 15-30. This includes rookie RB's.

(3) FBG projections are excellent at assessing RB rankings in the range 15-30 overall.

(4) Some combination (still to be investigated) of FBG projections and past year's performance is likely to provide a better predictor of RB ranking than either measure alone.

METHODS & DETAILED FINDINGS:

The focus here is on the rankings arrived at when stats are passed through the following scoring system:

1pt/5 yds rushing + 1pt/5 yds receiving + 1 PPR + 1pt/10 yds passing + 6 pts per TD passing or rushing or receiving - 3 pts per interception or fumble

The analysis was restricted to the top 40 RB's identified by using FBG stat projections with this scoring system. An initial comparison was made between the projected rank of these backs and their actual rank after week 16 of the 2005 regular season.
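To make the mechanics concrete, here's a rough sketch in Python of how a season stat line gets turned into points and how players get ordered into ranks under this system. The stat field names are made up for illustration, not taken from any particular data source:

```python
# Sketch only: stat field names are hypothetical.
def fantasy_points(stats):
    pts = stats["rush_yds"] / 5.0       # 1 pt per 5 yds rushing
    pts += stats["rec_yds"] / 5.0       # 1 pt per 5 yds receiving
    pts += stats["receptions"]          # 1 point per reception
    pts += stats["pass_yds"] / 10.0     # 1 pt per 10 yds passing
    pts += 6 * (stats["pass_td"] + stats["rush_td"] + stats["rec_td"])
    pts -= 3 * (stats["interceptions"] + stats["fumbles"])
    return pts

def rank_by_points(stat_lines):
    """stat_lines: player name -> stat dict. Returns names ordered
    best-first, so index 0 corresponds to rank 1."""
    return sorted(stat_lines,
                  key=lambda p: fantasy_points(stat_lines[p]),
                  reverse=True)
```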

The correlation coefficient between projected rank and actual rank within this data set was 0.55. Conventionally, this is interpreted to mean that FBG rankings accounted for roughly 30% of the variability in RB rankings. The RMS difference between projected and actual rank was roughly 24. This may seem high, but realize that we're only talking about the top end of the rankings here, and so the error is definitely not unbiased. That is, there's a lot more room to miss on the low side with these projections than on the high side.
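For anyone who wants to reproduce these two figures, here's one way they might be computed. The paired lists of projected and actual ranks are assumed inputs:

```python
import math

def pearson_r(xs, ys):
    """Correlation coefficient between two equal-length lists of ranks."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rms_diff(xs, ys):
    """Root-mean-square difference between paired ranks."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

# r = pearson_r(projected_ranks, actual_ranks)
# r ** 2 is the "share of variability accounted for" quoted above
# (0.55 ** 2 is roughly 0.30).
```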

With that in mind, it seems remarkable that, of the 40 backs, only 24 finished worse than their projected ranks; the other 16 all wound up ranked higher. Of those 16, 11 were projected to fall in the range of 15-30.

It seems clear that underestimating the value of a back in this range is far less serious an error than overestimating it. Experienced FFL players know that it's the choice of a second or third RB that often determines your team's fortunes. This is an exceptionally useful place to be picking backs who perform better than expected.

The link below gives a graphical representation of differences between FBG projected ranks and actual ranks under the given scoring system. The FBG rankings are along the x-axis, and the red "perfect" line shows what the graph would look like if FBG had predicted the ranking exactly. Thus, overestimations of value occurred where the blue graph is above the "perfect" line, underestimations where it is below "perfect."

FBG projected RB rank versus actual rank

The same methods were used to analyze prior year's performance as a ranking method. Interestingly, the correlation coefficient here was higher among the top 40: 0.58, which we interpret to mean that prior year's rank accounted for roughly 34% of the variability in this year's ranking. This suggests that some method of averaging or combining the FBG projections with the prior year's ranking may yield a still higher correlation, even if the two (as seems certain) are not independent.
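The simplest version of such a combination is a weighted average of the two ranks, with the weight chosen to maximize correlation against actual finish. A sketch, reusing pearson_r from the earlier snippet (the grid search is illustrative; the right weight would have to be fit on more than one season of data):

```python
def blended_rank(fbg_rank, prior_rank, w=0.5):
    """Blend FBG's projected rank with prior year's finish; w is FBG's share."""
    return w * fbg_rank + (1 - w) * prior_rank

def best_blend_weight(fbg, prior, actual, steps=101):
    """Grid-search the weight whose blend best correlates with actual rank."""
    best_w, best_r = 0.0, -1.0
    for i in range(steps):
        w = i / (steps - 1)
        blended = [blended_rank(f, p, w) for f, p in zip(fbg, prior)]
        r = pearson_r(blended, actual)  # defined in the earlier snippet
        if r > best_r:
            best_w, best_r = w, r
    return best_w, best_r
```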

This possibility seems even more likely when you examine where prior year's rank was good as a predictor, and where it was bad. The overall RMS difference in prior rank (among last year's top 40) versus this year's rank was 41--dramatically higher than FBG's projections. Prior year's rank turned out to be a better predictor than FBG in the range 1-15 but a great deal worse from 16 on down. Thus, using prior year's rank to temper FBG projections at the top end would, this year at least, have been a smart thing to do.

Despite its superior performance at the top, it's also worth noting that prior year's rank was far more likely to overestimate value than underestimate it. Among last year's top 40, 28 declined in rank, 3 (2 of the top 15) stayed in exactly the same place, and only 9 rose.

Here's the graphical representation of differences in last year's rank versus this year's, plotted on the same scale as that used in the graph above:

Prior year's RB rank versus actual rank

It might well be said that difference in rank is not the most important measure of how good a set of fantasy predictions really is. After all, the difference between RB1 and RB5 is typically much larger, in terms of point production, than the difference between RB31 and RB35. Thus, average measures of error in predicting rank obscure what you might call the "consequential difference" in ranks--a measure that depends upon point production.

In an effort to investigate these differences without utterly losing the ranking concept, the following method was used: actual point production was compared to the point production of the back who wound up at that back's projected rank. For example, under FBG projections, LT was the top-ranked back; he has accumulated 516 points so far. In actuality, Alexander is the top-ranked back (the back you thought you were getting when you drafted LT); he has accumulated 544 points so far. Thus, the consequential difference in the LT ranking is -28: you missed out on 28 points by thinking that the #1 back was LT, when in fact it was Alexander.

Finally, in an effort to make these numbers scale-neutral (which is not to say scoring-neutral, although it comes closer), the differences were taken as a percentage of a baseline production: that of the #40-ranked back (Antowain Smith: 147). That makes the consequential difference in the LT ranking roughly -19%; for comparison's sake, the consequential difference in the ranking of Deuce McAllister this year was -260%.
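In code, the calculation just described might look like the following; actual_points_by_rank is an assumed lookup from final rank to the point total of the back who finished there:

```python
def consequential_diff_pct(projected_rank, player_points,
                           actual_points_by_rank, baseline_rank=40):
    """Player's actual points minus the points of the back who really
    finished at the player's projected rank, expressed as a percentage
    of the #40 back's total. For the LT example above this is the
    -28-point gap (516 - 544) measured against the 147-point baseline."""
    benchmark = actual_points_by_rank[projected_rank]
    baseline = actual_points_by_rank[baseline_rank]
    return 100.0 * (player_points - benchmark) / baseline
```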

Over the entire top 40, FBG and prior year's rank look pretty comparable this way. FBG projections yielded an RMS consequential difference of 85%, while prior year's rank yielded an RMS consequential difference of...85%. The interesting results arise when you segment the samples.

Over the top 10 RBs, the RMS consequential difference for FBG was 115%; for prior year's rank, it was 75%--about a third less.

If you capture the top 15, it's 105% for FBG and 78% for the prior year. This is still a pretty large difference, and the bias of the errors is in favor of prior year's rank: FBG overestimated in 11 of the 15 cases, while prior year's rank only overestimated in 9. In the case of the FBG overestimations, 7 fell completely outside the top 15; for prior year's ranking, that number was only 5. This seems fairly good evidence to support the conclusion that being in the top 15 last year was a better predictor of value than was appearing in the FBG top 15.

On the other hand, in the range 15-30, FBG was by far the better performer. The RMS consequential error for FBG projections was 75%, while for prior year it was 91%. This may not look so large until you realize that this is the range where scoring production tends to bunch up, so a difference of that size in predictive value here is more dramatic than it would be over 1-15.

As was indicated above, however, it's the bias of these errors where FBG projections really shine. Of the 16 backs included in each sample, FBG overestimated in only 6 cases, whereas prior year's ranking overestimated in 11. In two illustrative cases (Nick Goings, Onterrio Smith), the overestimation was by more than 100%.

Below 30, as you can imagine, prior year's rankings are mostly hopeless. FBG's aren't great, but they're certainly better.

Here, for the sake of illustration, are graphs of consequential ranking error for FBG and prior year by projected rank:

Consequential rank difference: FBG

Consequential rank difference: prior year

Finally, I was interested in how FBG performed in two types of cases where prior year's rankings are likely to be bad: for rookies, and for backs who are newly taking over a starting job after having started none or only some of the team's games the previous year. Here are the FBG top-30 backs I'd classify this way:

Willis McGahee (7), Kevin Jones (9), Julius Jones (12), Steven Jackson (13), Lamont Jordan (19), Mike Anderson (21), JJ Arrington (22), Cadillac Williams (26), Ronnie Brown (30)

There were certainly other hard calls (Priest vs. LJ), and there were many more in the 30-40 range, but I wanted to look in this area, where some of the hardest and most significant projection decisions are made, and stay away from projection questions that involve predicting injury.

In terms of raw ranking difference, FBG overestimated in 4 of the 9 cases. There were three serious errors in terms of consequential value: K. Jones (-118%), Arrington (-95%), and Lamont Jordan (+105%). On the other hand, the only further error over 50% in either direction was McGahee (-63%).

What can best be learned from these facts is to temper FBG projections in these "hard" cases when the resulting ranking is better than 20th. Only 1 of the 4 overestimations occurred outside the top 20, and all but 1 of the serious errors occurred inside it. Jordan, of course, highlights the potential rewards in taking a flyer, but it looks as though the risks outweigh the benefits in the top 15 at least.

 

 
This probably got lost in the attempts of those of us scrambling to find a plug-in QB or RB to do so.

I thought this was a high quality analysis, and hope we see more of it.

Wondering what your qualitative reasoning is for the difference in predictive ability between FBG's and prior year's rank?

My initial thought is that since prior year's rank is "dumb," that dumbness will be more damaging to its ability to predict next year in the 15 and up range. Because we can use our human reasoning to determine whether Nick Goings is likely to repeat his performance (or some injured RB in 2004 vs. in 2005). Players who finished in the top 15 are much less likely to have been injured; to have been a fill-in for an injured player; or to have appeared out of nowhere (this seems to happen at mid-season, not game 2).

This doesn't explain why FBG's isn't so good in the top 15. I suppose it's "hope springing eternal" with folks like Tatum Bell and Kevin Jones.

Please keep it up!
I'll post more along these lines if there's interest. Going on the audible "thunk" when it was first posted, I'd decided to do further analysis on down the line for myself but not clutter the board with it.

The top-15 result was the one that interested me the most as well. Injuries explain why most of the top-15 backs who fell weren't there again this year, and of course the folks who do FBG projections know that. Any projection on a veteran player has to include a discount based on the estimated likelihood of injury and the injury's severity.

It strikes me that the major error FBG made in the top-15 picks was projecting young-ish players to experience fewer injuries than they had the previous season. When you look at backs whose performance was significantly affected by injury in 2004, most also had their production limited by injury this year--and my qualitative impression is that the majority of these backs wound up performing the same or worse than they had in the prior year.

Of course there are exceptions, but I think the safer strategy in making projections is not to expect a back who was hampered by injury in one year to shake it the following year--not even the young ones. What I see is a situation in which the majority of backs experience gradually increasing injury problems over time, no matter what their age.

 
Great thread. Nobody likes to read through "FBG was wrong" threads, but this one is different. If we (OK, you) can identify where FBG was wrong and find a pattern, that could be a critical piece of information to have next year.

 
I'd be most interested in seeing this done on WRs. I'd think/hope the FBG rankings of the top 30 would be better than prior year.

 
Thanks. If you get more "thunks" here in the shark pool, you might try the applications forum. There are fewer posters there (because there usually aren't many posts to respond to, except in draft season when everyone is focused on drafting methodologies). However, if you want people who are into statistics, you will find them there.

 
I'd be most interested in seeing this done on WRs. I'd think/hope the FBG rankings of the top 30 would be better than prior year.
My brother-in-law (co-owner) and I are planning to focus on "old guy" WRs in next year's draft. We've been accidentally following this the past couple of years and it has worked well. I suppose these old guys are falling to the middle rounds, where they interest us, because the other owners are slobbering over the new and improved versions and their "potential."

Anyway, I'm curious whether we are just getting lucky with our particular old guys or if there is traditional value to be had in rounds 6-10 among old guys in general. We had Galloway (waiver pickup) and Rod Smith all year, and they were solid. Plus an accident in TJ and a "potential guy" who never produced (Reggie Brown) until the game we needed him (the championship), when we had given up on him and didn't dream of starting him.

 

 
Great post Bob. I have two recommendations for you. First would be to convert this to standard scoring. Secondly, I would suggest using ppg performance to limit the impact of injuries since they generally can't be predicted and fantasy players can be replaced during the season. Did FBG really miss on McAllister or was he performing at the predicted level when he was playing?

 
Wow. This has to be one of the better analyses I have seen around here in a long, long time. Very nice work. Keep it coming.

 
great post and thanks for the bump or else many of us would have missed it.

I wonder if you averaged the 2005 FBG projections with 2004 performance, how much more accurate would the projections have been? also, is there any way to try and reduce the impact of serious injuries (i.e., players placed on IR)? For example, I wonder if you did a similar analysis on PPG, if FBG projections would do better than past performance.

Also, in terms of improving projections for 2006, what conclusions can you draw from this study?
Finding a method of averaging to improve the correlation is the next step after I get a similar analysis done on WR. If I can get something much better by combining FBG with past performance, I'll post it in this thread.

As for PPG comparisons...I haven't looked into it, but I believe total points is the most useful thing to study for this area of the draft. When you spend a first-round draft pick, you're not just buying production for the time a guy is on the field; if he's RB1 for weeks 1-6 and nobody for the rest of the season, that's a liability that PPG totally ignores. Part of the usefulness of projections is that they include a built-in estimate of the likelihood of that happening. While PPG is probably the most useful measure for in-season decisions, it still seems to me that total point production is the best one for draft purposes.

 
Great post Bob. I have two recommendations for you. First would be to convert this to standard scoring. Secondly, I would suggest using ppg performance to limit the impact of injuries since they generally can't be predicted and fantasy players can be replaced during the season. Did FBG really miss on McAllister or was he performing at the predicted level when he was playing?
If someone can post a link to a good place to get downloadable individual stats that go into Excel easily, then I'll put that on the list of things to look into.

For this analysis, I relied heavily on the myfantasyleague player reports from this particular league. It's convenient for learning things about that league, but recooking the numbers under alternative scoring systems isn't, I'm afraid.

 
hmmm...I will post a more detailed response if I get the time. I agree with some of the implications here.

 
Secondly, I would suggest using ppg performance to limit the impact of injuries since they generally can't be predicted and fantasy players can be replaced during the season. 
I still completely disagree with the pool's philosophy that "all players have the same injury risk."

I would have bet my left arm that Tiki was going to start more games than Julius Jones this year...or that Favre would start more games than Vick or Bulger.

 
great post and thanks for the bump or else many of us would have missed it.

I wonder if you averaged the 2005 FBG projections with 2004 performance, how much more accurate would the projections have been? also, is there any way to try and reduce the impact of serious injuries (i.e., players placed on IR)? For example, I wonder if you did a similar analysis on PPG, if FBG projections would do better than past performance.

Also, in terms of improving projections for 2006, what conclusions can you draw from this study?
Finding a method of averaging to improve the correlation is the next step after I get a similar analysis done on WR. If I can get something much better by combining FBG with past performance, I'll post it in this thread.

As for PPG comparisons...I haven't looked into it, but I believe total points is the most useful thing to study for this area of the draft. When you spend a first-round draft pick, you're not just buying production for the time a guy is on the field; if he's RB1 for weeks 1-6 and nobody for the rest of the season, that's a liability that PPG totally ignores. Part of the usefulness of projections is that they include a built-in estimate of the likelihood of that happening. While PPG is probably the most useful measure for in-season decisions, it still seems to me that total point production is the best one for draft purposes.
while I generally agree with this, it doesn't take into account the fact that when a player is injured, you can insert another player into the lineup. So, even if you drafted Deuce #5 overall...you don't actually wind up with whatever his total points were at one of your RB positions. Instead, you get his points for the first weeks he was healthy, and then the points of whatever RB you plug in after he goes on IR. There are almost always going to be productive players available on the waiver wire. Deuce's final ranking of 54th in total points doesn't really seem accurate as anyone who owned him would have dumped him. I wonder if there is some adjustment we could make to the points for these players (i.e., add on average ppg from RB50 or something) to try and more accurately reflect the performance of that player/roster spot and its contribution to one's team.

Anyway, just throwing out some ideas here. Just seems like there is an important difference between players who go on IR who you can replace vs players who miss a game here or there but still take up a roster spot that doesn't get reflected in this type of analysis. I don't think PPG is perfect either...just think it might be useful to try and take into account in some way. As another example, Priest Holmes finished as the #32 ranked RB in total points, but he was the #7 ranked RB in PPG. So, a top-4 projection wasn't really as far off for him as it seemed if you only look at total points.

 
great post and thanks for the bump or else many of us would have missed it.

I wonder if you averaged the 2005 FBG projections with 2004 performance, how much more accurate would the projections have been? also, is there any way to try and reduce the impact of serious injuries (i.e., players placed on IR)? For example, I wonder if you did a similar analysis on PPG, if FBG projections would do better than past performance.

Also, in terms of improving projections for 2006, what conclusions can you draw from this study?
Finding a method of averaging to improve the correlation is the next step after I get a similar analysis done on WR. If I can get something much better by combining FBG with past performance, I'll post it in this thread.

As for PPG comparisons...I haven't looked into it, but I believe total points is the most useful thing to study for this area of the draft. When you spend a first-round draft pick, you're not just buying production for the time a guy is on the field; if he's RB1 for weeks 1-6 and nobody for the rest of the season, that's a liability that PPG totally ignores. Part of the usefulness of projections is that they include a built-in estimate of the likelihood of that happening. While PPG is probably the most useful measure for in-season decisions, it still seems to me that total point production is the best one for draft purposes.
while I generally agree with this, it doesn't take into account the fact that when a player is injured, you can insert another player into the lineup. So, even if you drafted Deuce #5 overall...you don't actually wind up with whatever his total points were at one of your RB positions. Instead, you get his points for the first weeks he was healthy, and then the points of whatever RB you plug in after he goes on IR. There are almost always going to be productive players available on the waiver wire. Deuce's final ranking of 54th in total points doesn't really seem accurate as anyone who owned him would have dumped him. I wonder if there is some adjustment we could make to the points for these players (i.e., add on average ppg from RB50 or something) to try and more accurately reflect the performance of that player/roster spot and its contribution to one's team.

Anyway, just throwing out some ideas here. Just seems like there is an important difference between players who go on IR who you can replace vs players who miss a game here or there but still take up a roster spot that doesn't get reflected in this type of analysis. I don't think PPG is perfect either...just think it might be useful to try and take into account in some way. As another example, Priest Holmes finished as the #32 ranked RB in total points, but he was the #7 ranked RB in PPG. So, a top-4 projection wasn't really as far off for him as it seemed if you only look at total points.
True enough. One reasonable estimate of the Priest pick's PPG value would be a weighted average of Priest over the games he played with some nonzero baseline value over the games he didn't. This still doesn't capture many of the complexities involved, but it does get closer. It would be interesting to look into.

Of course, you might say that using that as a way of measuring the quality of projections actually circumvents the projections somewhat. What I was after in doing this analysis was a way of assessing the projections as they would be used during a fantasy draft. In order to use the projections in the way this average implies, you'd have to make your own additional projection in advance of the draft about how many weeks the player is likely to play. In an ideal world, this factor should be bound up in the projection itself.
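That weighted average is simple to write down; the only judgment call is what replacement level to assume for the missed games, which is exactly the extra projection mentioned above. A sketch:

```python
def adjusted_ppg(player_ppg, games_played, replacement_ppg, season_games=16):
    """Weighted average of a player's PPG over games played and an assumed
    replacement-level PPG (e.g., a waiver-wire back's average) over the
    games he missed. replacement_ppg = 0 collapses back to total points / 16."""
    missed = season_games - games_played
    return (player_ppg * games_played + replacement_ppg * missed) / season_games
```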

 
In order to use the projections in the way this average implies, you'd have to make your own additional projection in advance of the draft about how many weeks the player is likely to play. In an ideal world, this factor should be bound up in the projection itself.
Like a "risk factor" perhaps...I've been saying this for years. :wall:
 
For example, I wonder if you did a similar analysis on PPG, if FBG projections would do better than past performance.
Analysis on PPG is in some ways more relevant in H2H leagues where you can choose weekly starters. Annual projection analysis is obviously better suited for survivor formats.
Also, in terms of improving projections for 2006, what conclusions can you draw from this study?
Basically that the FF industry (not just FBG) has been inaccurately capturing risk. Which is one reason why Bob's findings show that 2004 actuals should have greater weight in projections, i.e. a player's PROVEN statistical reliability should have greater weight.

One thing to consider though, is that relying too much on proven statistics will many times not give you the upside to finish first in a league of 12. So while Bob's implications are that you should more heavily weight your projections with prior year actuals, in reality you could end up taking 4th place every year because you didn't draft the upside necessary, or take on the necessary risk.

 
Basically that the FF industry (not just FBG) has been inaccurately capturing risk. Which is one reason why Bob's findings show that 2004 actuals should have greater weight in projections, i.e. a player's PROVEN statistical reliability should have greater weight.

One thing to consider though, is that relying too much on proven statistics will many times not give you the upside to finish first in a league of 12. So while Bob's implications are that you should more heavily weight your projections with prior year actuals, in reality you could end up taking 4th place every year because you didn't draft the upside necessary, or take on the necessary risk.
Great analysis and great discussion going on in this thread. I find this stuff so much more interesting than WDIS drivel... Anyway, I think LHUCKS is spot on with risk assessment. I know that we discussed this quite a bit after last season. Discussed portfolio theory among other topics as a way to minimize risk. I know that LHUCKS was doing some things on the side with this, but don't know if anything came out of it.

But regardless. Risk is one of the major frontiers in FF that has yet to be fully explored and implemented. There is only so much accuracy that can be put into projections. Based on Bob's analysis, it appears that weighting last year's results can help improve projections. But even with that done, there is still a substantial amount of inaccuracy. Figuring out how to manage that inaccuracy more effectively will give a player a decided advantage.

 
Without taking too much of my "work time" to look at the numbers, a quick glance seems to indicate that the biggest portion of the FBG error was directly attributable to injuries. Take away the portion of the error for Deuce, KJ, Priest, Green, Staley and Suggs (and the resultant increase for related players like LJ and Droughns) and the FBG error is going to be negligible. The previous year's stats may provide some form of "risk assessment" value factor to consider by lowering expectations for guys injured in 2004 (and thus, maybe more likely to be injured in 2005)?

 
Hey guys, I think that risk analysis should be even broader than just injury risk - there are several other factors that a wise fantasy owner will include as risk factors when making up a draft list. Here's a link to an article I rewrote/updated/expanded on about Managing Risk during the pre-season this year:

My Take on Risk Management

Fantasy football is about more than just statistics and projections. While raw data about the players' anticipated on-field performance is useful and can help us on draft day in the selection process (once compiled and sorted), it doesn't paint the whole picture. Beyond what happens on game-day, we need to remember that players are members of a team - a political structure composed of human beings, with a dictator at the top of the heap (usually the head coach). In addition, each player has behavioral/mental dimensions that may affect the level of success he achieves in the course of any given NFL season. In order for your fantasy franchise to triumph in any given season, you need to manage the "intangible" risks that have bearing on your players' chances for success. What kind of risks can drag down your franchise's players? The list includes (but is not limited to): Chronic Injury; Attitude Problems; Coach's Doghouse (includes holdouts); Positional Battles (including the dreaded by-committee specter); Switching to a New Team; New Coaching Staff on Players' Team; Offensive Line Problems; Off Field Problems (Reported Substance Abuse/Legal Issues/League Suspensions); and finally, Drafting Too Many Players from One Team.
It goes on from there, but since it's in the premium/subscription section I won't excerpt the whole article. While my approach, as outlined in this article, is more qualitative than quantitative, I thought it was worth throwing into this discussion as the notion of Risk Management has become part of this thread. Take a look at it and see what ideas arise for you. My .02.
 
Excellent topic. Constructive criticism and discussion can only help. One thing I've learned over the years is not to rely on one set of projections or analyses. I use many sources to put my list together along with my own analysis. I do like to go back after the season and check my lists and some of the others to see where I was 'right or wrong'. Also, these projections should not be used in a vacuum. They should be fluid and ever-changing throughout the year.

 
Excellent topic. Constructive criticism and discussion can only help.

...I do like to go back after the season and check my lists and some of the others to see where I was 'right or wrong'....
This is what I'm after here, with the hope of finding some sort of pattern. In regions where the projections appear to be very strong, next year I'll be more willing to let go of whatever qualms I may have about a player in that region. In regions where the numerical projections are less strong, I'll be correspondingly more likely to rely on my perceptions of risk and adjust the rankings on the fly.
 

 
While my approach, as outlined in this article, is more qualitative than quantitative,
To me, this is the problem and this is what makes fantasy football challenging. Risk in the end is a "gut feeling" based on past history and current changes for the players. Each person is going to assign risk differently. For example...some people did not see Holmes as a big risk this year, while others thought he would play about 2 games. Of course the answer was somewhere in the middle, but I don't see any way you can ever quantify risk in fantasy football. Some people will be right and others wrong and that is what makes fantasy football fun. I agree with an earlier post that if you minimize risk too much you will probably finish 4th every year.
 

 
I'm having a hard time agreeing with the implied notion that a projection was "missed" because injury caused a player not to put up the numbers projected.

When I see your chart, it looks like the players who seriously underperformed the season total point projections were

D McAllister

P Holmes

K Jones

A Green

D Staley

L Suggs

JJ Arrington

C Benson

T Henry

Except for Arrington and Benson, all were largely related to significant injuries (K Jones played through a series of injuries including a separated shoulder).

The players who appeared to most significantly outperform projections were

T Barber

R Johnson

L Johnson

R Droughns

L Jordan

T Jones

Holmes' and Suggs' injuries directly affected the increase in LJ and Droughns. Barber and Johnson moved up in part because of attrition at the top -- dropping out a couple of players ahead of someone already ranked in the top 10 makes it appear that they had significant outperformance of projections, because of the way you are using ranking spaces as constant units of measurement.

It seems to me there were only two major projections missed -- JJ Arrington being a total bust and Benson failing to take any playing time from Thomas Jones. Those are inherent risks in trying to project rookie performance, though. (You could also argue that Holmes and Suggs had injury histories and their projections should have been tempered a little more.)

I don't see a pattern in the chart showing something that can be corrected. Projecting season point totals is a combination of (1) projecting PPG productivity and (2) tweaking it to account for the chance of individual injury, injury to teammates, loss of job or touches due to competition, etc. Seems to me that FBG is doing a good job of (1) and mostly a good job of (2).
Yeah, but when you look closely at these big differences, I think a pattern does emerge. Leaving out the rookies, look at these numbers for the overestimates you listed:

Deuce (FBG #4, 2004 #18, 2005 #51)

Priest (FBG #8, 2004 #20, 2005 #31)

K Jones (FBG #9, 2004 #21, 2005 #33)

A Green (FBG #14, 2004 #11, 2005 #57)

Staley (FBG #33, 2004 #36, 2005 #85)

Suggs (FBG #37, 2004 #31, 2005 #126)

Henry (FBG #40, 2004 #73, 2005 #62)

In only two of these cases was FBG closer to the actual 2005 rank than 2004 rank was. In virtually all of these cases (Green and possibly Deuce excepted), their 2004 performance was seriously limited by injury. FBG rankings of these players envisioned that 2005 would be different for them. It wasn't.

Similarly, take a look at the ones who performed significantly better:

Tiki (FBG #10, 2004 #1, 2005 #2)

Rudi (FBG #17, 2004 #10, 2005 #8)

LJ (FBG #29, 2004 #27, 2005 #4)

Droughns (FBG #35, 2004 #12, 2005 #12)

Jordan (FBG #19, 2004 #43, 2005 #6)

Thomas Jones (FBG #27, 2004 #13, 2005 #11)

Jordan was the only case here where FBG proved a better guide than prior year; on LJ the two calls were about the same. All of these backs were in situations that were in flux. It just seemed interesting to me that, here again, the consistency of veteran backs was underestimated.

Now there's a serious self-selection problem here--we're focusing on the ones that missed, and so we're not seeing all the cases where FBG projections proved good. Let me just say again (and again) that my purpose here is not to say that FBG projections are anything other than great. I wouldn't use them if I didn't think they were great. It just seems to me that changes at the top of the RB list aren't quite as dramatic on a year-by-year basis as the projections anticipate them to be.

 
While my approach, as outlined in this article, is more qualitative than quantitative,
To me, this is the problem and this is what makes fantasy football challenging. Risk in the end is a "gut feeling" based on past history and current changes for the players. Each person is going to assign risk differently. For example...some people did not see Holmes as a big risk this year, while others thought he would play about 2 games. Of course the answer was somewhere in the middle, but I don't see any way you can ever quantify risk in fantasy football. Some people will be right and others wrong and that is what makes fantasy football fun. I agree with an earlier post that if you minimize risk too much you will probably finish 4th every year.
I actually have a quantitative process, but have not perfected it. I've been tweaking it every year, but I'm still not there yet. This was my best year to date after 15 years of playing so I may be getting close.

 
I came in the bottom three of the staff in our preseason rankings versus on-field performance for RBs - we tracked it internally all year.

:bag:

I was in the middle on QBs and TEs and was in the top 3 in WRs.

:pickle:
The money is at WR. My guess is Wood was near the top.

 
It just seems to me that changes at the top of the RB list aren't quite as dramatic on a year-by-year basis as the projections anticipate them to be.
I would caution against making too many conclusions like this based on one year's worth of data. Do we know that the turnover from 2004 to 2005 was similar to the turnover experienced in previous years? Maybe this year was unusual in the number of RBs that finished high in both 2004 and 2005.

Since you seemed to use top-15 as the cutoff point, I wonder how common it is for RBs to finish in the top-15 in years N and N+1 over the past decade or so.

 
I came in the bottom three of the staff in our preseason rankings versus on-field performance for RBs - we tracked it internally all year.

:bag:

I was in the middle on QBs and TEs and was in the top 3 in WRs.

:pickle:
The money is at WR. My guess is Wood was near the top.
I usually feel like I do a great job with my WRs, but this year I was apparently way off for some reason.

After 16 weeks, my preseason rankings ranked 12th in QBs, 2nd in RBs, 16th in WRs, and 2nd in TEs. Pretty solid, and I had a pretty good year in most of my leagues as a result.

Top rankings by position were: QBs - Hicks, RBs - Henry, WRs - Shick, TEs - Baker.

 
I came in the bottom three of the staff in our preseason rankings versus on-field performance for RBs - we tracked it internally all year.

:bag:

I was in the middle on QBs and TEs and was in the top 3 in WRs.

:pickle:
The money is at WR. My guess is Wood was near the top.
I usually feel like I do a great job with my WRs, but this year I was apparently way off for some reason.

After 16 weeks, my preseason rankings ranked 12th in QBs, 2nd in RBs, 16th in WRs, and 2nd in TEs. Pretty solid, and I had a pretty good year in most of my leagues as a result.

Top rankings by position were: QBs - Hicks, RBs - Henry, WRs - Shick, TEs - Baker.
There are certain rankings that I favor at FBG. 2 of the 4 persons noted are those that I favor.

Good stuff. I like the fact you guys track this.

 
I came in the bottom three of the staff in our preseason rankings versus on-field performance for RBs - we tracked it internally all year.

:bag:

I was in the middle on QBs and TEs and was in the top 3 in WRs.

:pickle:
The money is at WR. My guess is Wood was near the top.
I usually feel like I do a great job with my WRs, but this year I was apparently way off for some reason.

After 16 weeks, my preseason rankings ranked 12th in QBs, 2nd in RBs, 16th in WRs, and 2nd in TEs. Pretty solid, and I had a pretty good year in most of my leagues as a result.

Top rankings by position were: QBs - Hicks, RBs - Henry, WRs - Shick, TEs - Baker.
Ruds, can you explain the methodology behind how these rankings were calculated?
 
 
Ruds, can you explain the methodology behind how these rankings were calculated?
Drinen did it. Here was his explanation (don't think anyone will mind me reposting here):

I decided to grade these rankings with the following procedure, using WRs as an example:

1. Find all WRs who were ranked in the top 30* by somebody (EDIT: plus all WRs who are currently in the top 30). There are 50 WRs that meet this criterion.

2. Given 50 WRs, there are 1225 pairs of WRs, which means there are 1225 WDID decisions.

3. For each staff member, look at each of those 1225 pairs. If you picked the right guy (as measured by total fantasy points), you get a point. If not, you don't.

* - I used top 12 for QB, top 24 for RB, and top 12 for TE.
:thumbup: Interesting. It would be cool to see the actual numbers for each staff member.
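For the curious, that grading procedure is only a few lines of code. Here's a sketch; the rank and point lookups are assumed inputs, and pairs tied in actual points (a corner case the description above doesn't address) are skipped:

```python
from itertools import combinations

def grade_staffer(staff_rank, actual_points):
    """staff_rank: player -> preseason rank (lower is better).
    actual_points: player -> total fantasy points.
    One point per pair where the staffer ranked the higher scorer ahead;
    50 players gives C(50, 2) = 1225 pairs, as in the WR example."""
    score = 0
    for a, b in combinations(sorted(staff_rank), 2):
        if actual_points[a] == actual_points[b]:
            continue  # tie in actual points: skip the pair
        staff_pick = a if staff_rank[a] < staff_rank[b] else b
        real_winner = a if actual_points[a] > actual_points[b] else b
        if staff_pick == real_winner:
            score += 1
    return score
```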
 
Great analysis and discussion here guys! One question I have is how to account for the week 15-17 playoff weeks in my projections. I generally do very well at picking the players necessary to get me to the playoffs, but then falter due to players being rested during my playoff weeks.

As an example, if you had Tom Brady, Shaun Alexander and Marvin Harrison on your team, you fared pretty well and probably made the playoffs. However, when it came time for week 16-17 games, you were left playing the likes of Kyle Boller, Zack Crockett, and K. McCardell. Ouch!

This happens to me a lot and very much so the last two seasons.

I'm just looking for some different viewpoints. TIA

 
It just seems to me that changes at the top of the RB list aren't quite as dramatic on a year-by-year basis as the projections anticipate them to be.
I would caution against making too many conclusions like this based on one year's worth of data. Do we know that the turnover from 2004 to 2005 was similar to the turnover experienced in previous years? Maybe this year was unusual in the number of RBs that finished high in both 2004 and 2005.

Since you seemed to use top-15 as the cutoff point, I wonder how common it is for RBs to finish in the top-15 in years N and N+1 over the past decade or so.
I can't do a quick check over that period of time, but I did look into the 2003 -> 2004 transition quickly.

18 of 2003's top 30 RB's were in 2004's top 30, including 10 of 2003's top 15.

20 of 2004's top 30 RB's were in 2005's top 30, including 14 of 2004's top 15.

2004's numbers were much more useful for predicting relative rank within these groups in the following year than 2003's were. I know 2 years isn't a big sample (10 isn't either if you want to be a statistician about it), but there seem to be indications of some consistency here.
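Extending that carryover check to more seasons would be mechanical once the year-end rank lists are in hand. Something like this, with rankings given best-first:

```python
def retention(prev_ranking, next_ranking, n_prev=30, n_next=30):
    """How many of last year's top n_prev finished in this year's top n_next."""
    return len(set(prev_ranking[:n_prev]) & set(next_ranking[:n_next]))

# e.g. retention(ranks_2004, ranks_2005, 30, 30) -> 20 per the counts above,
# and retention(ranks_2004, ranks_2005, 15, 30) -> 14.
```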

 
Ruds, can you explain the methodology behind how these rankings were calculated?
Drinen did it. Here was his explanation (don't think anyone will mind me reposting here):

I decided to grade these rankings with the following procedure, using WRs as an example:

1. Find all WRs who were ranked in the top 30* by somebody (EDIT: plus all WRs who are currently in the top 30). There are 50 WRs that meet this criterion.

2. Given 50 WRs, there are 1225 pairs of WRs, which means there are 1225 WDID decisions.

3. For each staff member, look at each of those 1225 pairs. If you picked the right guy (as measured by total fantasy points), you get a point. If not, you don't.

* - I used top 12 for QB, top 24 for RB, and top 12 for TE.
:thumbup: Interesting. It would be cool to see the actual numbers for each staff member.
:thumbup: Like LHUCKS, I favor some rankings over others. I'd like to see if I'm off on the guys I trust more.

I'd also like to know for defenses and PKs - seems like Capybara's the man at PK, but I don't know if the numbers showed that to be the case.

 
Ruds, can you explain the methodology behind how these rankings were calculated?
Drinen did it. Here was his explanation (don't think anyone will mind me reposting here):

I decided to grade these rankings with the following procedure, using WRs as an example:

1. Find all WRs who were ranked in the top 30* by somebody (EDIT: plus all WRs who are currently in the top 30). There are 50 WRs that meet this criterion.

2. Given 50 WRs, there are 1225 pairs of WRs, which means there are 1225 WDID decisions.

3. For each staff member, look at each of those 1225 pairs. If you picked the right guy (as measured by total fantasy points), you get a point. If not, you don't.

* - I used top 12 for QB, top 24 for RB, and top 12 for TE.
:thumbup: Interesting. It would be cool to see the actual numbers for each staff member.
:thumbup: Like LHUCKS, I favor some rankings over others. I'd like to see if I'm off on the guys I trust more.

I'd also like to know for defenses and PKs - seems like Capybara's the man at PK, but I don't know if the numbers showed that to be the case.
Doug didn't run the numbers for PKs, DEF, or IDPs, but maybe he will when he gets some time. As for releasing the data to compare how we all did, I think you'd have to ask Drinen.

 
Ruds, can you explain the methodology behind how these rankings were calculated?
Drinen did it. Here was his explanation (don't think anyone will mind me reposting here):

I decided to grade these rankings with the following procedure, using WRs as an example:

1. Find all WRs who were ranked in the top 30* by somebody (EDIT: plus all WRs who are currently in the top 30). There are 50 WRs that meet this criterion.

2. Given 50 WRs, there are 1225 pairs of WRs, which means there are 1225 WDID decisions.

3. For each staff member, look at each of those 1225 pairs. If you picked the right guy (as measured by total fantasy points), you get a point. If not, you don't.

* - I used top 12 for QB, top 24 for RB, and top 12 for TE.
:thumbup: Interesting. It would be cool to see the actual numbers for each staff member.
:thumbup: Like LHUCKS, I favor some rankings over others. I'd like to see if I'm off on the guys I trust more.

I'd also like to know for defenses and PKs - seems like Capybara's the man at PK, but I don't know if the numbers showed that to be the case.
Doug didn't run the numbers for PKs, DEF, or IDPs, but maybe he will when he gets some time. As for releasing the data to compare how we all did, I think you'd have to ask Drinen.
and Joe and David - it is really their call.
 
