
Expert Rankings for Previous Years

Clearly there are a number of issues with the validity of pre-season predictions and year-end results - no argument there. Chase - are you saying (without actually saying it) that FBG will not make the previous years' predictions and year-end results available in an easy-to-download spreadsheet? Just askin'.

Thanks,
Bink
I have no idea what FBG will do. I don't believe FBG is averse to doing it, but obviously for the staffers this is a pretty busy time right now for that type of project.

I'm merely stating my opinion that the data are worthless if you don't know how to interpret them. There have been threads here where one projector was ranked as the best at ranking RBs, and then another person used his own system and that same projector ranked second to last. That's not good, and it's why I think it would cause more harm than good to look too hard at one year's worth of data. I'm not even sure ten years' worth of data is enough, even if you have the correct system. Since we're going to have one year of data and no correct system, it seems like a waste of time. I want FBG to be accountable, but I don't see how you can compare FBG staff rankers in a meaningful and accurate way.

It's not dissimilar from the threads that pop up about how half of the top 10 RBs get displaced every year. That information would be bad for you if it caused you to arbitrarily drop players out of your top 10 so that your rankings fit with past practice.
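To illustrate how much the scoring system matters, here is a minimal sketch with invented numbers: the same two sets of projections produce opposite winners depending on whether you score by points error or by rank error (projectors X and Y and all figures are hypothetical, not anyone's actual rankings):

```python
# Toy demonstration: the same projections can "win" or "lose" depending
# entirely on the scoring system. All numbers are invented for illustration.
actual = {"P1": 300, "P2": 250, "P3": 200, "P4": 150, "P5": 100}

# Projector X nails the order but inflates everyone's points by 100.
proj_x = {"P1": 400, "P2": 350, "P3": 300, "P4": 250, "P5": 200}
# Projector Y is close on points but shuffles the order.
proj_y = {"P1": 250, "P2": 300, "P3": 150, "P4": 200, "P5": 100}

def ranks(scores):
    """Map each player to his rank (1 = most points)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {p: i + 1 for i, p in enumerate(ordered)}

def mae(a, b):
    """Mean absolute error between two player -> number mappings."""
    return sum(abs(a[p] - b[p]) for p in a) / len(a)

for name, proj in [("X", proj_x), ("Y", proj_y)]:
    print(f"Projector {name}: points MAE = {mae(proj, actual):5.1f}, "
          f"rank MAE = {mae(ranks(proj), ranks(actual)):.1f}")
# Points MAE says Y is better (40 < 100); rank MAE says X is better (0 < 0.8).
```

Same data, opposite conclusion - which is exactly why "who was the best projector" depends on whose system you use.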
 
Problems with analyzing past rankings

If anyone could come up with a scoring system that accurately rewarded the people whose projections were the best, I'd lead the charge on analyzing everyone's past projections/rankings (although I think a lot less interesting information would be gleaned from that than most people expect). Here are some problems I see - a toy scorer after the list shows how one of them plays out:

1. Staffer A thinks Adrian Peterson is awesome and Chester Taylor is terrible. He ranks Peterson #8 and Taylor #54. Staffer B thinks a bit differently, ranking Peterson at #30 and Taylor at #18. Staffer A and Staffer B both agree that Taylor has a 20% chance of getting injured, which is factored into their rankings. Taylor blows his ACL on the first play of the season, and Peterson finishes as the 4th-best RB. Whose projection was better? It seems to me the question went unresolved - the outcome hinged on an injury risk both staffers priced identically. Yet any scoring system would reward Staffer A significantly and punish Staffer B heavily.

2. Staffer A thinks Willie Parker has a 10% chance of getting injured. Staffer B thinks Parker has a 20% chance of getting injured. Both staffers agree on Parker's production when healthy, so Staffer B has Parker ranked #10 and Staffer A has Parker ranked #6. Consider:

A) Parker does not get hurt, but plays poorly and ranks 20th. Staffer B seems to be unfairly rewarded.

B) Parker does not get hurt, but actually had a 25% chance of getting injured. Staffer B does not receive any credit for his superior job of gauging Parker's injury risk, and gets penalized when Parker stays healthy and ranks 6th. Staffer A seems to win unjustifiably.

C) Parker does get hurt, but actually had just a 5% chance of getting injured. Now Staffer B looks like the better forecaster, even though Staffer A's injury estimate was closer to the truth.

3. Staffer A thinks the loss of Tarik Glenn is going to hurt Manning. Staffer B thinks it will not. Staffer A ranks Manning 3rd; Staffer B ranks Manning 1st. The loss of Glenn doesn't hurt Manning, but Marvin Harrison gets injured and Manning only finishes third. Staffer A seems to be unfairly credited with a victory.

4. Staffer A thinks Travis Henry stinks. Staffer B thinks anyone in Denver will do well. Staffer A ranks Henry 20th, Staffer B ranks Henry 5th. Henry plays at an incredible level for 8 weeks, then gets injured for the season and ranks 30th. His replacement plays at an incredible level for the last 8 weeks. Staffer B was right on both counts, Staffer A was wrong on his only one, yet Staffer A "wins".

5. Staffer A and Staffer B both agree that McNabb will average 24 FP/G. They both agree that McNabb will play 10 games. Staffer A ranks McNabb as if he projected him to score 240 points, and has him ranked 15th. Staffer B decides to add 15 FP/G for the remaining 6 games, because that's what a replacement-level QB will score. Staffer B ranks McNabb as if he projected him to score 330 FPs, ranking him 3rd. McNabb averages 24 FP/G, gets injured after 10 games, and ends the season ranked 15th. Both staffers perfectly nailed what would happen. Yet Staffer B is not rewarded at all, despite actually having the more useful ranking for drafters.

6. Staffer B thinks that Marshawn Lynch will be a stud down the stretch and lead teams to fantasy glory. He ranks him 15th, thinking he'll be around 30th for most of the season but a RB1 when it counts; he expects Lynch to finish around 25th overall. Staffer A thinks Lynch will be the same all year, and ranks him 25th. Staffer B's prediction comes perfectly to life, and many teams with Lynch win their championship. But Lynch ranks 25th, and Staffer A wins despite being less accurate with his guess as to what would happen than Staffer B.
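To make problem 5 concrete, here is a minimal sketch of a naive scorer. The rule - absolute rank error - is my own assumed scoring system for illustration, not anyone's actual method:

```python
# Naive scorer applied to example 5: penalize the absolute gap between a
# staffer's preseason rank and the player's final finish. The rule is an
# assumption for illustration, not an actual FBG scoring system.
def rank_error(predicted_rank: int, final_rank: int) -> int:
    return abs(predicted_rank - final_rank)

mcnabb_final = 15      # averages 24 FP/G, injured after 10 games
staffer_a_rank = 15    # ranked on projected points alone (240)
staffer_b_rank = 3     # folded in replacement-level value (330)

print("Staffer A error:", rank_error(staffer_a_rank, mcnabb_final))  # 0
print("Staffer B error:", rank_error(staffer_b_rank, mcnabb_final))  # 12
# Both staffers nailed McNabb's actual season, yet A "wins" 0 to 12,
# even though B's rank was arguably more useful on draft day.
```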
Chase,

First, thanks for participating in this particular conversation. Your points are well-taken. I agree that there will be many situations in which, as in your examples, A's prediction is no better than (and may be much worse than) B's prediction, yet Staffer A gets a lucky "win".

However, this would seem to be where sample size is so very important. Over time, that luck/randomness factor would be reduced significantly, as Staffer B would presumably be getting his share of those lucky "wins" over time as well. Given enough data to compare over a number of years, some helpful (albeit not absolutely definitive) conclusions could be drawn ... even some helpful information on trends (as you pointed out in another post, some experts' views may be improving over time).
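A quick simulation illustrates the sample-size point. This is a minimal sketch with invented noise levels, not FBG data: two rankers who differ only in projection accuracy, scored by total absolute error against the same simulated seasons. Over one season the weaker ranker "wins" fairly often; over ten seasons, almost never:

```python
# Toy simulation of the sample-size argument. All numbers (player values,
# noise levels) are invented; this is not based on any real FBG data.
import random

random.seed(1)
TRUE_VALUE = [300 - 8 * i for i in range(30)]  # underlying per-player points

def weaker_ranker_wins(years):
    """True if the noisier ranker posts the lower total error anyway."""
    err_a = err_b = 0.0
    for _ in range(years):
        for t in TRUE_VALUE:
            actual = t + random.gauss(0, 40)                # on-field luck
            err_a += abs(t + random.gauss(0, 10) - actual)  # sharper ranker
            err_b += abs(t + random.gauss(0, 25) - actual)  # noisier ranker
    return err_b < err_a

for years in (1, 5, 10):
    upsets = sum(weaker_ranker_wins(years) for _ in range(2000))
    print(f"{years} season(s): weaker ranker wins {upsets / 2000:.0%} of trials")
```

One year of results says very little about who the better ranker is; the signal only separates from the luck as the seasons pile up.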

 
Agreed - any and all info should be applied at the user's risk. How do we put in a formal request for this? I think it would be good for draft prep, and I can't imagine it would be a ton of work - we're looking for static, existing info, no modeling, etc. at this point. I have last year's Aug. 31 rankings but not the detailed prediction information, so the bigger sample would be fun to play with. I have always appreciated FBG's responsiveness to member requests. Though I have no way of knowing for sure, I believe I may have helped prompt FBG to start the Roundtable series with a couple of notes here to Joe - that developed into exactly what I was hoping for, and it is still one of the better in-season features.
 
I can't imagine it will take much time at all to add the actual results to last year's predictions.

Last year's predictions exist already.

Last year's actual results exist already.

It doesn't seem like a very time-consuming activity to put them side by side in one simple table.
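For what it's worth, the mechanics of that table are simple. Here's a minimal sketch assuming hypothetical CSV exports - the file and column names are placeholders, not actual FBG files:

```python
# Sketch of the requested side-by-side table. The CSV file names and
# column names are hypothetical placeholders, not actual FBG exports.
import pandas as pd

preds = pd.read_csv("preseason_rankings_2007.csv")  # player, staffer, pre_rank
finals = pd.read_csv("final_results_2007.csv")      # player, final_rank

table = preds.merge(finals, on="player", how="left")
table["rank_error"] = (table["pre_rank"] - table["final_rank"]).abs()

# One row per (staffer, player): preseason rank next to the final finish.
print(table.sort_values(["staffer", "rank_error"]).to_string(index=False))
```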

Why the resistance to publishing the side-by-side results? As many have noted, it would be a great opportunity for some FBG staffers to show how good their predictions are.

Dodds? You out there?

 
