
Yearly Assessment of Staff Rankings

Would you like to see staff rankings assessed yearly?

  • Yes

    Votes: 0 0.0%
  • No

    Votes: 0 0.0%

  • Total voters
    0

Musesboy

Footballguy
There has been a recent thread and plenty of discussion in the past as to whether we should measure the success of FBGs rankings. I thought it would be interesting to see what the consensus view is.

My own view is that accountability and assessment are vital. In any job, improvements are made by reviewing your methods and their success, so that strengths and weaknesses can be found.

Do you make your own projections and then forget about them, or do you look back to see how you did?

Is it important to review success, or a pointless waste of time once the season has ended?

What do you think?

If we do this, the evaluation system would need to be carefully constructed. Perhaps something based on the accuracy of the projected points for each player?

 
Although we'd all love to see who is "best", a ranking of this type is difficult to perform at all, and virtually impossible to do fairly. Injuries alone can skew the rankings unfairly. Outlier rankings can have a huge positive or negative effect, making one guy's rankings appear less (or more) accurate than they really were.

I'm voting "yes" here, but would caution that any rankings would have to be taken with a HUGE grain of salt.

 
I'm pretty sure we already do this internally. But I do not believe that the results have ever been fully circulated. Doug has tools to monitor everything these days . . .

 
Although we'd all love to see who is "best", a ranking of this type is difficult to perform at all, and virtually impossible to do fairly. Injuries alone can skew the rankings unfairly. Outlier rankings can have a huge positive or negative effect, making one guy's rankings appear less (or more) accurate than they really were.

I'm voting "yes" here, but would caution that any rankings would have to be taken with a HUGE grain of salt.
I was thinking of taking the projected fantasy points for each player, and measuring the percentage accuracy of each. Something like this:

1. Allow 25 ranking points (or whatever number you decide) for each player ranked.

2. Assess the rankings at the end of the year to see the percentage accuracy for each player by each staff member - if you project Tomlinson at 300 points, figure out the percentage difference under or over (it doesn't matter).

3. For each percentage point of error, subtract 1 point from the 25 possible points - and do this for every player.

4. Injuries will occur, but everyone will be affected by them. So what if a few players don't score you any ranking points?

5. Add the total ranking points for each staff member.

Yes, I know you can find flaws in all this. Maybe tweak it a little. But we currently have no measure at all, so surely this would be better than that?
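For concreteness, here is a minimal sketch of that scoring scheme in Python. The 25-point allotment and the one-point-per-percentage-point-of-error penalty come straight from the proposal above; the player names and numbers are made up for illustration.

```python
# Minimal sketch of the proposed accuracy score (illustrative numbers only).
# Each ranked player is worth up to 25 points; one point is subtracted for
# every percentage point the projection missed by, floored at zero.

def player_score(projected, actual, max_points=25):
    """Points earned for one player under the proposed scheme."""
    if projected <= 0:
        return 0.0
    pct_error = abs(actual - projected) / projected * 100  # % over or under
    return max(0.0, max_points - pct_error)

def staffer_score(projections, actuals):
    """Total ranking points for one staff member across all ranked players."""
    return sum(player_score(proj, actuals[name]) for name, proj in projections.items())

# Hypothetical example: Tomlinson projected at 300 points, finishes with 270
# (10% off, so 15 points); a second player misses by 30% and scores nothing.
projections = {"Tomlinson": 300, "Player B": 200}
actuals = {"Tomlinson": 270, "Player B": 140}
print(staffer_score(projections, actuals))  # 15.0
```

As the replies below point out, a pure percentage-of-projection error has quirks (it is harsh on low-scoring players and gives zero credit for anything more than 25% off), so the formula would likely need tweaking.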
 
I'm pretty sure we already do this internally. But I do not believe that the results have ever been fully circulated. Doug has tools to monitor everything these days . . .
It would be great to see the results of that. For instance, how does that feedback impact your own projections in future years?
 
I'm pretty sure we already do this internally. But I do not believe that the results have ever been fully circulated. Doug has tools to monitor everything these days . . .
It would be great to see the results of that. For instance, how does that feedback impact your own projections in future years?
For me I don't think it made much difference. And I also don't think that someone doing extremely well one year automatically translates to doing well the following year. I could be wrong on that, but that's my perception.

For example, one year (at the last point I checked the spreadsheet for how accurate our rankings were, very late in the year) I was ranked #1 in staff rankings for RBs and QBs, #3 for TEs, but below average for WRs. IIRC, the next year I was Top 3 in WRs but below average in RBs.

I also am not a huge fan of rankings and much prefer tiering, as I would consider having a bucket of 8 guys that score within 5 or 6 points of each other a better representation of accuracy than getting those same players in the wrong order rankings-wise and then getting penalized for it when they were all almost the same anyway.

Ultimately I think that having full projections for all teams and players is a better comparison, but generating and updating those on a regular basis is a lot of work (and not many staffers compile and/or post them).
 
Although we'd all love to see who is "best", a ranking of this type is difficult to perform at all, and virtually impossible to do fairly. Injuries alone can skew the rankings unfairly. Outlier rankings can have a huge positive or negative effect, making one guy's rankings appear less (or more) accurate than they really were.

I'm voting "yes" here, but would caution that any rankings would have to be taken with a HUGE grain of salt.
I was thinking of taking the projected fantasy points for each player, and measuring the percentage accuracy of each. Something like this:

1. Allow 25 ranking points (or whatever number you decide) for each player ranked.

2. Assess the rankings at the end of the year to see the percentage accuracy for each player by each staff member - if you project Tomlinson at 300 points, figure out the percentage difference under or over (it doesn't matter).

3. For each percentage point of error, subtract 1 point from the 25 possible points - and do this for every player.

4. Injuries will occur, but everyone will be affected by them. So what if a few players don't score you any ranking points?

5. Add the total ranking points for each staff member.

Yes, I know you can find flaws in all this. Maybe tweak it a little. But we currently have no measure at all, so surely this would be better than that?
If I project Ryan Fitzpatrick to score 4 points and he actually scores 1, I was pretty much balls-on dead accurate.

If I project Tony Romo to score 380 points and he actually scores 220, I was way off.

But the percentage method would grade my Romo projection as being much better than my Fitzpatrick projection. I don't think that's useful.

It's not a trivial task to come up with a method of evaluation that is useful.
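To put rough numbers on that objection (same figures as above, with error measured as a percentage of the projection, matching the earlier proposal):

```python
# Absolute miss vs. percentage miss for the two examples above.
fitz_error = abs(1 - 4)              # off by 3 fantasy points
romo_error = abs(220 - 380)          # off by 160 fantasy points
fitz_pct = fitz_error / 4 * 100      # 75% of the projection
romo_pct = romo_error / 380 * 100    # ~42% of the projection
print(fitz_pct, round(romo_pct, 1))  # 75.0 42.1
```

So the near-miss on Fitzpatrick grades out worse than a 160-point miss on Romo, which is the objection.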

 
4. Injuries will occur, but everyone will be affected by them. So what if a few players don't score you any ranking points?
This is the major issue though. Someone may get "credit" for a player that they severely underrated on a points per game basis. It's not the same for everyone.
 
I think that individual success in projections in year N is not a predictor for success in year N+1. I doubt that the study would reveal much more than who did the best last year. I am much more a follower of the discussion of the individual players than the projections anyway.

I vote not worth the effort.

 
I would like to see this, but I bet Colin finished dead last due to him having Benson ranked at #4.
Although I didn't agree with that ranking, I can appreciate guys going against the grain and being willing to stand by/debate their predictions.
:thumbup: The debate of the projections is much more important than the predictions themselves.
Agreed, but I got no time for blatant homerism in rankings we pay for either. LOL
 
Not only would I like to see post-season ranking reviews, I'd also like to see post-draft rookie re-rankings because rookies' values hinge on which team and situation they are drafted into. The only one to do this to a small degree has been Bloom with his top 100.

 
I'm new around here so I don't have a great feel for how everything works yet. My guess, however, is that if you start to grade staff rankings you'll see less and less individual thinking and less staff going with "gut" feelings or hunches in their rankings. I prefer the "off the wall" suggestions in the rankings since sometimes "hunches" and "gut feelings" on a player are what make or break a FF championship season. Assessing the staff rankings will only promote "group think" for fear of failure - IMO.

 
I'm new around here so I don't have a great feel for how everything works yet. My guess, however, is that if you start to grade staff rankings you'll see less and less individual thinking and less staff going with "gut" feelings or hunches in their rankings. I prefer the "off the wall" suggestions in the rankings since sometimes "hunches" and "gut feelings" on a player are what make or break a FF championship season. Assessing the staff rankings will only promote "group think" for fear of failure - IMO.
Actually I don't think it will. These guys are all big boys.
 
I'm pretty sure we already do this internally. But I do not believe that the results have ever been fully circulated. Doug has tools to monitor everything these days . . .
It would be great to see the results of that. For instance, how does that feedback impact your own projections in future years?
For me I don't think it made much difference. And I also don't think that someone doing extremely well one year automatically translates to doing well the following year. I could be wrong on that, but that's my perception.

For example, one year (at the last point I checked the spreadsheet for how accurate our rankings were, very late in the year) I was ranked #1 in staff rankings for RBs and QBs, #3 for TEs, but below average for WRs. IIRC, the next year I was Top 3 in WRs but below average in RBs.

I also am not a huge fan of rankings and much prefer tiering, as I would consider having a bucket of 8 guys that score within 5 or 6 points of each other a better representation of accuracy than getting those same players in the wrong order rankings-wise and then getting penalized for it when they were all almost the same anyway.

Ultimately I think that having full projections for all teams and players is a better comparison, but generating and updating those on a regular basis is a lot of work (and not many staffers compile and/or post them).
I use a tiering system too, but I still like to know whether my tiers were accurate. This is not an evaluation of FF strategy, just the accuracy of the predictions. If the accuracy is not of any importance, why bother having them at all?

The problem with tiering players is that you need some kind of comparison between players at a different position; that's where the projections may have some value perhaps?
 
If I project Ryan Fitzpatrick to score 4 points and he actually scores 1, I was pretty much balls-on dead accurate.

If I project Tony Romo to score 380 points and he actually scores 220, I was way off.

But the percentage method would grade my Romo projection as being much better than my Fitzpatrick projection. I don't think that's useful.

It's not a trivial task to come up with a method of evaluation that is useful.
The percentage method would give you zero points in both cases, because you would only get points if you were within 25%. The 25% is just a random figure that I threw out there of course. There is probably a better way. I take your point.

 
4. Injuries will occur, but everyone will be affected by them. So what if a few players don't score you any ranking points?
This is the major issue though. Someone may get "credit" for a player that they severely underrated on a points per game basis. It's not the same for everyone.
Yes that's true. Especially if the player ends up starting most games. If he starts 3 or 4, few people would get anywhere near his total.

PPG is a useful measure, but injuries are part of the game. Some players have a chronic injury history or known issues, and that should be taken into account in any projection. However, if a player starts two games and gets 20 points per game, it doesn't follow that 320 points for the year was a good or useful projection, and it should not be rewarded.
 
I think that individual success in projections in year N is not a predictor for success in year N+1. I doubt that the study would reveal much more than who did the best last year. I am much more a follower of the discussion of the individual players than the projections anyway. I vote not worth the effort.
Fair enough. That's what I was asking really. Do we care about this or not?
 
Perhaps there should be another poll:

Do you think having the balls to evaluate your projections increases or decreases the site's credibility?

It would raise its image in my eyes, but I accept that some might feel differently.

If I were a staff member I would certainly be interested to see where I ranked among the other industry experts.

 
Perhaps there should be another poll: Do you think having the balls to evaluate your projections increases or decreases the site's credibility? It would raise its image in my eyes, but I accept that some might feel differently. If I were a staff member I would certainly be interested to see where I ranked among the other industry experts.
Among the newbies...it would almost certainly lower it. People don't appreciate how difficult it is until they've played a few years.
 
This is what I would like the data to tell me:

When a staff member disagreed with the herd, how did that player do?

 
For example, one year (at the last point I checked the spreadsheet for how accurate our rankings were, very late in the year) I was ranked #1 in staff rankings for RBs and QBs, #3 for TEs, but below average for WRs. IIRC, the next year I was Top 3 in WRs but below average in RBs. I also am not a huge fan of rankings and much prefer tiering.......
:lmao: This perfectly illustrates part of my earlier point. If you take these rankings too seriously, then it unfairly diminishes the opinion/viewpoint of any specific staffer (or inflates it). I'm not really sure it matters a lot which staffer is most accurate in any given year (I suspect the title would rotate around a lot). The discussion spurred by the rankings is what is most important.
 
Not a fan of this. While it could make for interesting discussion, I'm sure it will very quickly devolve into a "let's hold their feet to the fire because we're paying for this" attitude. I'd rather not see that in my hobby. If it were hardcore gambling, maybe the year-end record counts more. But to me, this is all for fun, even if entry fees are involved. Just my .02.

Voted "no".

 
Even if the FBG prognosticators don't agree to do this, there's nothing stopping the OP or anyone else from doing it themselves.

 
Not a fan of this. While it could make for interesting discussion, I'm sure it will very quickly devolve into a "let's hold their feet to the fire because we're paying for this" attitude. I'd rather not see that in my hobby. If it were hardcore gambling, maybe the year-end record counts more. But to me, this is all for fun, even if entry fees are involved. Just my .02. Voted "no".
I agree with you jwb. But I'd like to see it for the fun of it, and it would allow us to weight each person's projections with a higher degree of accuracy. Just another tool.

Is there a way to get 2007 rankings and projections?
 
This is what I would like the data to tell me: When a staff member disagreed with the herd, how did that player do?
Great point. That's ultimately what is relevant to most people. On a slightly different tack, I'd be interested in seeing how far each FBG's rankings overall differ from the mean. Just gives you a sense of how independent each guy is, and IF some reasonable way of comparing results were possible, that would be a great metric to compare to. I think what most sharks wonder is how well it pays to think independently vs. accept the conventional wisdom. The answer, of course, is 'it depends', but it's still interesting.
 
I'm pretty sure we already do this internally. But I do not believe that the results have ever been fully circulated. Doug has tools to monitor everything these days . . .
It would be great to see the results of that. For instance, how does that feedback impact your own projections in future years?
For me I don't think it made much difference. And I also don't think that someone doing extremely well one year automatically translates to doing well the following year. I could be wrong on that, but that's my perception.

For example, one year (at the last point I checked the spreadsheet for how accurate our rankings were, very late in the year) I was ranked #1 in staff rankings for RBs and QBs, #3 for TEs, but below average for WRs. IIRC, the next year I was Top 3 in WRs but below average in RBs.

I also am not a huge fan of rankings and much prefer tiering, as I would consider having a bucket of 8 guys that score within 5 or 6 points of each other a better representation of accuracy than getting those same players in the wrong order rankings-wise and then getting penalized for it when they were all almost the same anyway.

Ultimately I think that having full projections for all teams and players is a better comparison, but generating and updating those on a regular basis is a lot of work (and not many staffers compile and/or post them).
:rolleyes: There are too many variables in trying to rank players. When I do view the rankings, I don't look at where a player is ranked but at who he is closely ranked alongside. Projections based on whether a RB will reach 1250/1400/1550 or 4/8/12 TDs, etc., might even be more valuable than a 1-25 ranking. :lmao:

 
I think one way of potentially doing this, for accountability and to see how the staff are doing, is this:

Multiple FBG staff submit their rankings and then we are able to get a composite ranking. This composite ranking could be used like "ADP" to determine an average. In fact, what I'm about to propose could be done both vs. the composite rankings AND also vs. ADP at a major site like MFL.

Then, every player that a staffer ranks at least "x" spots away from that composite or ADP rankings gets looked at. Thus, one staffer may have a total of 9 players that were ranked significantly different than the composite and/or ADP. We then assign a grade on each of those players. This grade could be weighted in terms of how high the player is ranked (i.e. more weight for guys ranked 1-30 than guys ranked 30-60 than guys ranked 60-90, etc.). Then, each staffer has a "report card" detailing each player, where their composite rank was, where their rank was, how well it turned out (good prediction, bad prediction, didn't matter), and the reason why they have that player so far off. They can submit their opinions on these few players at the beginning of the season and then those reasons can be looked at after the season is over.

I would like something like this because being off by just a few spots one way or another isn't devastating. But when a guy like Portis has a composite ranking of 18 and one staffer has him at 8, then I'd like to know why. Then, at the end of the year, when Portis finishes at 8, we can see that they were indeed right and if their reasons were justified. I also don't think it would require a lot of extra work/effort from staffers as there really should only be a few players per staffer that would be significantly different than the composite. I don't know what a good "x" number of spots difference would be, but I think one could arbitrarily be picked. Of note, the composite that each staffer would be compared to should exclude their rankings as part of the composite.

Just my :2cents:

ETA--I think this report card would be good in detailing what things certain staffers excel at (i.e., paying attention to coaching changes, looking at personnel changes, etc.) and what things they seem to overemphasize that don't pan out. Might be helpful for them as well as us.
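A rough sketch of how the flagging step might work is below. The idea of a composite rank that excludes the staffer's own ballot and an "x spots away" threshold comes from the post above; the staffer names, the x = 10 threshold, and the sample numbers are placeholders, and the season-end grading and tier weighting would still be layered on top.

```python
from statistics import mean

def flagged_players(all_ranks, staffer, x=10):
    """Return (player, own rank, composite rank) for the staffer's outliers.

    all_ranks maps staffer name -> {player name: preseason rank}.
    The composite for each player excludes the staffer's own rank.
    """
    others = [name for name in all_ranks if name != staffer]
    flags = []
    for player, own_rank in all_ranks[staffer].items():
        composite = mean(all_ranks[o][player] for o in others)
        if abs(own_rank - composite) >= x:
            flags.append((player, own_rank, round(composite, 1)))
    return flags

# Hypothetical example: staffer "A" is the only one high on Portis.
ranks = {
    "A": {"Portis": 8, "Evans": 23},
    "B": {"Portis": 18, "Evans": 24},
    "C": {"Portis": 19, "Evans": 22},
}
print(flagged_players(ranks, "A"))  # [('Portis', 8, 18.5)]
```

Each flagged (player, own rank, composite rank) row would then go on the staffer's report card along with his stated reasoning, to be revisited once the season's final ranks are known.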

 
It sounds like this is what you may want: Value Plays. That's from last June, with each staffer listing the players he was particularly high or low on (compared to the consensus), and why. (Unfortunately it doesn't list the player's consensus ranking at the time.)
 
This is what I would like the data to tell me:

When a staff member disagreed with the herd, how did that player do?
Excellent suggestion! This is exactly what I was thinking. If the staff have SJax ranked 1, 3, 3, 3, 4, 2 and 3 and Jackson snaps his knee in week 2, should a ranking rating really reflect (wow, that's a lot of r's) the fact that the 4th overall staffer was however many more % points more correct than the staffer having him 1st overall? Now if they have Andre Johnson 4th, 7th, 8th, 2nd, 3rd, 2nd and 16th, the 16th overall ranking is what I'd like to see singled out and tracked. Maybe a % deviation from the cumulative ranking "flags" a player to be tracked. Identifying who is generally more successful when they go out on a limb would be much more interesting (and useful).

In the end though, I think some of the negative impacts pointed out in this thread will keep this from happening. People who are reading FBG and posting in the SP here in May could take the ratings in stride but I can already see the "Why on earth are you even employing so-and-so" threads come November.

 
This is what I would like the data to tell me:

When a staff member disagreed with the herd, how did that player do?
Great point. That's ultimately what is relevant to most people. On a slightly different tack, I'd be interested in seeing how far each FBG's rankings overall differ from the mean. Just gives you a sense of how independent each guy is, and IF some reasonable way of comparing results were possible, that would be a great metric to compare to. I think what most sharks wonder is how well it pays to think independently vs. accept the conventional wisdom. The answer, of course, is 'it depends', but it's still interesting.
As one of the long-time staffers and rankers around here, I'll chime in on the above, as I often have some of the most "maverick" opinions. I often get taken to task on this board for this (I also have to defend my views internally with the other FBG staff, and particularly to David Dodds, who are all demanding and competitive individuals as well). It is definitely a lonely place to be when your analysis breaks down in the face of a season, as happened to me in 2006 when I had Steven Jackson MUCH lower than anyone else pre-season and he went on to post a stellar (#3 overall RB) season, despite my concerns about his thin and aging OL and the question of whether or not he could stand up to a full 16-game season of featured-back pounding. Obviously, he did stand up to it and thrived that year, despite losing Orlando Pace for the final 6 games of the 2006 season.

Entering 2007, I had the same concerns about the OL, but since Jackson had so decisively proven his worth during '06, I put him in the top 5 RBs in my rankings, as he is the sort of back who can make an under-par to par OL look good when he's on the field. Then, in 2007, my concern over the OL was proved justified: Pace went down week 1, Bulger got spammed by rushers about a kazillion times, and Jackson struggled to score (only 5 rushing TDs as a team last year) - and struggled to stay healthy - resulting in a disappointing 14th place finish for Jackson.

So, here we are at the beginning of 2008, and I'm struggling along (again) as the lone voice urging caution on Jackson this year (see my full Blog post for more on my struggle with projecting Jackson in particular for 2008).

Now, what sort of yearly assessment could be devised that would take into account the driving logic behind a particular staffer's rankings? What if at the end of the season 2 years running, a particular staffer turned out to be significantly inaccurate for any given player - as I have been on Steven Jackson both seasons? Put another way, is it useful to know why I have missed on Steven Jackson 2 years running, as explained in this post and my Blog discussion, or would people simply tend to discount my rankings as a set simply due to the bald fact I missed on Jackson both 2006 and 2007?

I don't know the answer to the above questions, I am just posing them and setting my rankings/rationale of/on Jackson up as an example of the difficulty of putting a statistical/numerical "grade" on any given observer's rankings on a player-by-player basis and then passing a verdict on the whole set based on the individual player-members of a one-year sample.

 
It sounds like this is what you may want: Value Plays. That's from last June, with each staffer listing the players he was particularly high or low on (compared to the consensus), and why. (Unfortunately it doesn't list the player's consensus ranking at the time.)
WOW, thanks for the link MT, I just went back and looked at the value plays listing. We are dealing with an inexact science here.
 
It sounds like this is what you may want: Value Plays. That's from last June, with each staffer listing the players he was particularly high or low on (compared to the consensus), and why. (Unfortunately it doesn't list the player's consensus ranking at the time.)
Excellent example, MT - Take a look at 2007 Overvalued QB's as a subset of the whole article series, with my 3 calls in this article highlighted in particular. I made 2 correct calls on this page:

Vince Young (finished 2007 #19 QB among fantasy QBs 07):

Mark Wimer - QB7? Who is Young going to throw to this year? WR Drew Bennett is gone, WR David Givens' left knee is a wreck, and TE Ben Troupe is coming off a broken ankle (November, 2006) that sidelined him just as Young was developing. Brandon Jones, Courtney Roby, etc. and the host of rookie WRs don't inspire a lot of confidence. The Titan's chemistry is lacking, and their WR cupboard is essentially bare. Young is #13 on my QB board as of mid-June.
Philip Rivers (finished 2007 #15 fantasy QB):

Mark Wimer - Philip Rivers made the NFL Pro Bowl last season, but he wasn't a top-5 fantasy QB. With 284/460 for 3388 yards, 22 TDs and 9 interceptions, he landed at #9 among fantasy signal callers as of year's end. Now, there's a new coaching staff in town, and a host of young and/or lackluster WRs that will be asked to snag Rivers' passes this year. Vincent Jackson, with 2 years NFL experience and a career best 27/453/6 last year figures to be the #1 WR; #2 belongs to 5-year veteran Eric Parker, whose best season came in 2005 with 57/725/3. (Parker did not catch a TD last season (48/659/0)). Antonio Gates is an all-world TE, but he can't generate a top-10 passing offense single-handed. I look for Rivers to regress during 2007, finishing out of the top 10 (I have him at #15 as of mid-June).
and I blew one call on this page:

Tony Romo (finished 2007 #2 fantasy QB):

Mark Wimer - Tony Romo ran hot and cold last season after taking over the helm in week 7. He looked like a world-beater at times (22/29 for 306 yards, 5 TDs and 0 interceptions week 12 vs. Tampa Bay) but sometimes looked like a first-year starter (14/29 for 142 yards, 1 TD and 2 interceptions vs. Philly week 16). This year, he'll be working with a new head coach (Wade Phillips), QB coach (Wade Wilson) and offensive coordinator (Jason Garrett) -- there will be a lot to absorb as the new regime settles into the saddle. It would be no surprise to see Romo come out of the gates cold during 2007, which will limit his fantasy upside. I think he'll significantly under-perform his mid-June ADP as the 9th QB drafted. I currently have him at #18 on my QB board.
So, given the above (very small) sample, I correctly identified 2 out of 3 fantasy QBs as over-valued (in mid-June), but the deviation from the miss on Romo (#18 to #2) would tend to make the set of 3 (in a statistical, numerical analysis based on this standard) look not nearly as impressive as saying "Wimer correctly identified 2 out of 3 (66%) of the over-valued QBs he flagged during 2007". So which is the better measure - a standard based on variance (where Tony Romo's success wrecks the set of 3 for me personally) or is the simpler, 2 out of 3 a more correct measure? Or is neither helpful without the analysis that accompanied this snippet of our yearly offerings here at FBGs?...
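For what it's worth, the two measures being contrasted can be put side by side on just those three calls (own mid-June rank vs. actual finish). This is only a sketch: treating a call as a "hit" whenever the player finished no better than the staffer's own rank is my stand-in for "correctly flagged as over-valued", and squared rank error stands in for the variance-style grade.

```python
# The three over-valued QB calls above: (own mid-June rank, actual finish).
calls = {"Young": (13, 19), "Rivers": (15, 15), "Romo": (18, 2)}

# Hit rate: a call counts as correct if the player finished no better
# (no lower numerically) than the staffer's own rank.
hits = sum(actual >= own for own, actual in calls.values())
hit_rate = hits / len(calls)                                  # 2 of 3

# Variance-style grade: mean squared rank error, dominated by the Romo miss.
mse = sum((actual - own) ** 2 for own, actual in calls.values()) / len(calls)
print(round(hit_rate, 2), round(mse, 1))                      # 0.67 97.3
```

The same set of calls looks like a solid 2-for-3 under the first measure and fairly poor under the second, which is exactly the tension described above.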

 
It sounds like this is what you may want: Value Plays. That's from last June, with each staffer listing the players he was particularly high or low on (compared to the consensus), and why. (Unfortunately it doesn't list the player's consensus ranking at the time.)
Well, it's one thing to list the value plays. But this thread is about going back and ASSESSING those value plays. Also, it's about each staff member having their individual players that are ranked significantly differently from other staff members (not just value plays made against the general FF population). Once they give their reasoning for why they are different from other staffers, you can go back and assess that reasoning and the results at the end of the year.
 
So, given the above (very small) sample, I correctly identified 2 out of 3 fantasy QBs as over-valued (in mid-June), but the deviation from the miss on Romo (#18 to #2) would tend to make the set of 3 (in a statistical, numerical analysis based on this standard) look not nearly as impressive as saying "Wimer correctly identified 2 out of 3 (66%) of the over-valued QBs he flagged during 2007". So which is the better measure - a standard based on variance (where Tony Romo's success wrecks the set of 3 for me personally) or is the simpler, 2 out of 3 a more correct measure? Or is neither helpful without the analysis that accompanied this snippet of our yearly offerings here at FBGs?...
Exactly. I think going back and assessing these value plays is probably more helpful than trying to do it for an entire set of rankings. Less work overall and more useful as I don't care if you ranked J. Lewis 11th and he finished 8th. I do care if you had Portis listed 6th when everyone else had him 20th and he ended up finishing 6th. That tells me you knew something and I'd like to know why you knew it. I'd also like to know why you have Evans ranked 11th when everyone else has him 23rd and he finishes 28th. If I see you dropping another WR for similar reasons this year, it may be something to step back and look at closer. I hope that makes sense.
 
I think that individual success in projections in year N is not a predictor for success in year N+1. I doubt that the study would reveal much more than who did the best last year. I am much more a follower of the discussion of the individual players than the projections anyway. I vote not worth the effort.
Fair enough. That's what I was asking really. Do we care about this or not?
I know it would be more difficult to put together, since it will take longer, but what I feel would be very relevant is projection success over a course of years. The consistency factor - your ability over time to accurately predict - is more insightful than hitting on a few players one year.
 
Exactly. I think going back and assessing these value plays is probably more helpful than trying to do it for an entire set of rankings. Less work overall and more useful as I don't care if you ranked J. Lewis 11th and he finished 8th. I do care if you had Portis listed 6th when everyone else had him 20th and he ended up finishing 6th. That tells me you knew something and I'd like to know why you knew it. I'd also like to know why you have Evans ranked 11th when everyone else has him 23rd and he finishes 28th. If I see you dropping another WR for similar reasons this year, it may be something to step back and look at closer. I hope that makes sense.
I think I'm following you - you're interested in opinions other than the norm (like ADP) and whether or not they panned out for a particular observer last season (or in '06, or the past 3 years, or whatever yardstick you wish to apply). I think that assessing our outliers is a good idea. That's why I love Doug Drinen's latest refinement of our staff rankings interface, with the staff able to offer their rationale for their outlier rankings. Mouse over the "blue" numbers for the texts that accompany the rankings. Some of my Steven Jackson discussion is to be found there now, as well as in this thread and in my blog post.

 
Mark's example is exactly why this is a really hard, and probably unfair, proposition. Moreover, FF isn't a game decided by whose projections are the closest 1 through 500. Whatever formula you come up with doesn't necessarily reflect fantasy success; at best it would reflect success in whatever formula has been arbitrarily devised.

I'd compare it to having 10 great poker players explain how they value a certain starting hand. There is surely a mathematically 'certain' answer, but that doesn't account for the psychology and flow of the game, which is what makes poker sharks deadly and mathematicians not.

At the end of the day, which FBG is more valuable, the one who thought Steven Jackson was overvalued in '08, or the one that told you to keep your eye on Earnest Graham? It completely depends on context. Ultimately, do I really care if somebody's top 10 was a few percent off the pack... but he suggested a sleeper that won my season (and may well have won his)?

 
I think one way of potentially doing this for accountability to see how they are doing is this:

Multiple FBG staff submit their rankings and then we are able to get a composite ranking. This composite ranking could be used like "ADP" to determine an average. In fact, what I'm about to propose could be done both vs. the composite rankings AND and also vs. ADP at a major site like MFL.

Then, every player that a staffer ranks at least "x" spots away from that composite or ADP rankings gets looked at. Thus, one staffer may have a total of 9 players that were ranked significantly different than the composite and/or ADP. We then assign a grade on each of those players. This grade could be weighted in terms of how high the player is ranked (i.e. more weight for guys ranked 1-30 than guys ranked 30-60 than guys ranked 60-90, etc.). Then, each staffer has a "report card" detailing each player, where their composite rank was, where their rank was, how well it turned out (good prediction, bad prediction, didn't matter), and the reason why they have that player so far off. They can submit their opinions on these few players at the beginning of the season and then those reasons can be looked at after the season is over.

I would like something like this because being off by just a few spots one way or another isn't devastating. But when a guy like Portis has a composite ranking of 18 and one staffer has him at 8, then I'd like to know why. Then, at the end of the year, when Portis finishes at 8, we can see that they were indeed right and if their reasons were justified. I also don't think it would require a lot of extra work/effort from staffers as there really should only be a few players per staffer that would be significantly different than the composite. I don't know what a good "x" number of spots difference would be, but I think one could arbitrarily be picked. Of note, the composite that each staffer would be compared to should exclude their rankings as part of the composite.

Just my :shrug:

ETA--I think this report card would be good in detailing what things certain staffers excel at (i.e., paying attention to coaching changes, looking at personnel changes, etc.) and what things they seem to overemphasize that don't pan out. Might be helpful for them as well as us.
It sounds like this is what you may want: Value Plays. That's from last June, with each staffer listing the players he was particularly high or low on (compared to the consensus), and why. (Unfortunately it doesn't list the player's consensus ranking at the time.)
Excellent example, MT - Take a look at 2007 Overvalued QB's as a subset of the whole article series, with my 3 calls in this article highlighted in particular.I made 2 correct calls on this page:

Vince Young (finished 2007 #19 QB among fantasy QBs 07):

Mark Wimer - QB7? Who is Young going to throw to this year? WR Drew Bennett is gone, WR David Givens' left knee is a wreck, and TE Ben Troupe is coming off a broken ankle (November, 2006) that sidelined him just as Young was developing. Brandon Jones, Courtney Roby, etc. and the host of rookie WRs don't inspire a lot of confidence. The Titan's chemistry is lacking, and their WR cupboard is essentially bare. Young is #13 on my QB board as of mid-June.
Philip Rivers (finished 2007 #15 fantasy QB):

Mark Wimer - Philip Rivers made the NFL Pro Bowl last season, but he wasn't a top-5 fantasy QB. With 284/460 for 3388 yards, 22 TDs and 9 interceptions, he landed at #9 among fantasy signal callers as of year's end. Now, there's a new coaching staff in town, and a host of young and/or lackluster WRs that will be asked to snag Rivers' passes this year. Vincent Jackson, with 2 years NFL experience and a career best 27/453/6 last year figures to be the #1 WR; #2 belongs to 5-year veteran Eric Parker, whose best season came in 2005 with 57/725/3. (Parker did not catch a TD last season (48/659/0)). Antonio Gates is an all-world TE, but he can't generate a top-10 passing offense single-handed. I look for Rivers to regress during 2007, finishing out of the top 10 (I have him at #15 as of mid-June).
and I blew one call on this page:

Tony Romo (finished 2007 #2 fantasy QB):

Mark Wimer - Tony Romo ran hot and cold last season after taking over the helm in week 7. He looked like a world-beater at times (22/29 for 306 yards, 5 TDs and 0 interceptions week 12 vs. Tampa Bay) but sometimes looked like a first-year starter (14/29 for 142 yards, 1 TD and 2 interceptions vs. Philly week 16). This year, he'll be working with a new head coach (Wade Phillips), QB coach (Wade Wilson) and offensive coordinator (Jason Garrett) -- there will be a lot to absorb as the new regime settles into the saddle. It would be no surprise to see Romo come out of the gates cold during 2007, which will limit his fantasy upside. I think he'll significantly under-perform his mid-June ADP as the 9th QB drafted. I currently have him at #18 on my QB board.
So, given the above (very small) sample, I correctly identified 2 of the 3 fantasy QBs I flagged as over-valued in mid-June, but the size of the miss on Romo (#18 projected vs. a #2 finish) would make the set of 3 look far less impressive under a statistical, deviation-based standard than under the simple statement "Wimer correctly identified 2 out of 3 (66%) of the over-valued QBs he flagged during 2007". So which is the better measure - a standard based on variance (where Tony Romo's success wrecks the set of 3 for me personally), or the simpler 2-out-of-3 hit rate? Or is neither helpful without the analysis that accompanied this snippet of our yearly offerings here at FBGs?...
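To put some toy numbers on that, here is the same 3-QB sample scored both ways. The "hit" definition and the error metric below are just one arbitrary choice each, for illustration:

```python
# My three 2007 "overvalued" QB calls as (my mid-June rank, actual fantasy finish).
calls = {
    "Vince Young":   (13, 19),
    "Philip Rivers": (15, 15),
    "Tony Romo":     (18, 2),
}

# Measure 1: simple hit rate, crudely defining a "hit" as the player finishing
# no better than where I had him ranked.
hits = sum(1 for mine, actual in calls.values() if actual >= mine)
print(f"Hit rate: {hits} of {len(calls)}")                     # 2 of 3

# Measure 2: deviation-based, the average absolute gap between my rank and the finish.
errors = [abs(mine - actual) for mine, actual in calls.values()]
print(f"Mean absolute error: {sum(errors) / len(errors):.1f} spots")
# Young 6 + Rivers 0 + Romo 16 = 22 spots of error, so the single Romo miss is
# roughly 73% of the total under this kind of measure.
```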
Exactly. I think going back and assessing these value plays is probably more helpful than trying to do it for an entire set of rankings. Less work overall and more useful, as I don't care if you ranked J. Lewis 11th and he finished 8th. I do care if you had Portis listed 6th when everyone else had him 20th and he ended up finishing 6th. That tells me you knew something, and I'd like to know why you knew it. I'd also like to know why you have Evans ranked 11th when everyone else has him 23rd and he finishes 28th. If I see you dropping another WR for similar reasons this year, it may be something to step back and look at more closely. I hope that makes sense.
I think I'm following you - you're interested in opinions other than the norm (like ADP) and whether or not they panned out for a particular observer last season (or in 06, or the past 3 years, or whatever yardstick you wish to apply). I think that assessing our outliers is a good idea. That's why I love Doug Drinen's latest refinement of our staff rankings interface, with the staff able to offer their rationale for their outlier rankings. Mouse over the "blue" numbers for the texts that accompany the rankings. Some of my Steven Jackson discussion is to be found there now, as well as in this thread and in my blog post.
Exactly. And with those comments, that shouldn't be too difficult to do. I don't know if there's an objective formula to be used, but even if it's just a page identifying the players that were outliers for each staffer, their rank, their final rank, and their associated comments, we can then see for ourselves and decide what's significant and what's not. Either way, it's a way of getting better overall. And, for instance, if a certain staffer was pimping a WR that no one else was and then I see him doing it the next year, I'm bound to pay a little more attention to that.
 
I would like to see this, but I bet Colin finished dead last due to having Benson ranked at #4.
Although I didn't agree with that ranking, I can appreciate guys going against the grain and being willing to stand by/debate their predictions.
:confused: The debate around the projections is much more important than the predictions themselves.
Agreed, but I've got no time for blatant homerism in rankings we pay for either. LOL
The debate and projections are the most important to me. They help me see the tiering and reasoning behind the ranking.

Interesting that so many 2007 rankings predicted the decline of the Bears' O-line and the effect it would have on Benson, considering it was the first time the rock was all his and his college performance seemed like it would translate well to the NFL.

Finding value in dynamic situations sometimes comes from people who keep track of their team closely.
 
I also am not a huge fan of rankings and much prefer tiering, as I would consider having a bucket of 8 guys that score within 5 or 6 points of each other a better representation of accuracy than getting those same players in the wrong order, ranking-wise, and then getting penalized for it when they were all almost the same anyway.

Ultimately I think that having full projections for all teams and players is a better comparison, but generating and updating those on a regular basis is a lot of work (and not many staffers compile and/or post them).
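For what it's worth, a tier-based grade could be as simple as something like the sketch below. The tier sizes, player names, and finishes are invented, purely to show the idea:

```python
# Sketch of tier-based grading: a projection counts as "right" if the player's
# actual finish lands inside the projected tier, rather than docking points for
# every one-spot difference in an ordered list. All data below is made up.

projected_tiers = {          # tier number -> players projected into that bucket
    1: ["RB One", "RB Two", "RB Three"],
    2: ["RB Four", "RB Five", "RB Six", "RB Seven"],
}
actual_finish = {"RB One": 2, "RB Two": 7, "RB Three": 1,
                 "RB Four": 4, "RB Five": 15, "RB Six": 6, "RB Seven": 5}

def tier_of(rank, tiers):
    """Map an end-of-year rank back onto the projected tier boundaries."""
    boundary = 0
    for tier in sorted(tiers):
        boundary += len(tiers[tier])
        if rank <= boundary:
            return tier
    return max(tiers) + 1        # finished below every projected tier

for tier, players in projected_tiers.items():
    for player in players:
        landed = tier_of(actual_finish[player], projected_tiers)
        verdict = "hit" if landed == tier else f"miss (landed in tier {landed})"
        print(f"{player}: projected tier {tier}, {verdict}")
```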
Curious as to why the emphasis here is on projections/rankings, then, rather than tiering?
 
I think this might also show something that has validity: some of the staff are probably stronger on RBs than WRs or QBs, etc. As we all know, each one of us is stronger with our "scout" eye for certain positions, and even our "statistical" eye may be stronger for one subset as well. The point is that we may find out that Wimer is the one to listen to on QBs, Dodds and Bryant are the ones to listen to about WRs, and Tremblay is the RB guru?
 
I have been asking for this for years... even asked Joe and the guys through a PM.

I would like to see an application just like the current VBD app where, after the season, we are able to plug in our scoring system and see where all the players actually ended up... then we could compare that to our pre-draft VBD cheatsheet...

see how good they really are...

Plus, a tool like this would allow us to prepare for a new league we may be entering or a team we may be taking over... we could plug in that league's scoring system and see how guys did there the previous year.
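The guts of that kind of post-season tool wouldn't need to be much -- something along these lines, with your own league's scoring weights plugged in. Every stat line and name below is fabricated, and the baseline rule is just a placeholder:

```python
# Sketch of a post-season "actual VBD" calculator: apply a league's scoring
# system to final stats, then measure each player against a positional baseline,
# the same way a pre-draft cheatsheet does. All numbers below are fake.

scoring = {"pass_yd": 0.05, "pass_td": 4, "rush_yd": 0.1, "rush_td": 6,
           "rec_yd": 0.1, "rec_td": 6, "int": -1}

final_stats = [
    {"name": "QB Alpha", "pos": "QB", "pass_yd": 4200, "pass_td": 30, "int": 12},
    {"name": "QB Beta",  "pos": "QB", "pass_yd": 3300, "pass_td": 20, "int": 15},
    {"name": "RB Alpha", "pos": "RB", "rush_yd": 1500, "rush_td": 12, "rec_yd": 300},
    {"name": "RB Beta",  "pos": "RB", "rush_yd": 900,  "rush_td": 5,  "rec_yd": 250},
]

def fantasy_points(player):
    """Score one player's final stat line under the league's scoring weights."""
    return sum(scoring[stat] * value
               for stat, value in player.items() if stat in scoring)

# Baseline: the lowest-scoring listed player at each position (a real tool would
# set the baseline from league size and starting requirements instead).
by_position = {}
for player in final_stats:
    by_position.setdefault(player["pos"], []).append((fantasy_points(player), player["name"]))

for pos, players in by_position.items():
    players.sort(reverse=True)
    baseline = players[-1][0]
    for points, name in players:
        print(f"{name}: {points:.1f} pts, VBD {points - baseline:+.1f}")
```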

 
I'm pretty sure we already do this internally. But I do not believe that the results have ever been fully circulated. Doug has tools to monitor everything these days . . .
As a paying customer for at least 5 years, I'd like to be privy to it. My guess is that the winner actually may be different every year because you guys are all solid, but I'd still like to see it as a source of information. Over a 5-year span and beyond, something like that could be useful in terms of consistency.

I mean, even a guy like Lhucks may get lucky one season, but over a 5-year span... forget about it! :lmao:

Edit to add: I know he's not staff, just taking a left jab at a fantasy football buddy.
 