Expert Rankings for Previous Years

rascal
I was wondering if there was a link to the FBG experts' rankings from prior to each season for the last two years or so?

I'm not trying to crucify anybody's rankings, but I would like to see who appears to be right at QB, RB, WR, etc. on average more than the rest.

Thanks

 
rascal said:
I was wondering if there was a link to the FBG experts' rankings prior to each season for the last two years or so? ...
All prior years are deleted.
 
There has to be some record of it. As each expert submits their rankings the system is updated, but there has to be a record. For example, the Top 200 list for each year, usually given out at the end of August, is determined by the average expert ranking. So there probably exists a record of what those rankings were at the time of the Top 200 list.

 
rascal said:
I was wondering if there was a link to the FBG experts' rankings prior to each season for the last two years or so? ...
Do you think some experts are much better at some positions than others? I find it hard to imagine that Maurile Tremblay nails wide receivers but misses QBs, or that Jason Wood is great on RBs but can't rank tight ends. Unless you had 20 years of data, it'd be hard to find any meaningful results on this, IMO.
 
Chase Stuart said:
Do you think some experts are much better at some positions than others? I find it hard to imagine that Maurile Tremblay nails wide receivers but misses QBs, or that Jason Wood is great on RBs but can't rank tight ends. ...
Yes. Especially if you're just looking at a couple of rankings, there'd be no way to separate random variation from meaningful prediction.
 
Chase Stuart said:
Do you think some experts are much better at some positions than others? ... Unless you had 20 years of data, it'd be hard to find any meaningful results on this, IMO.
I think you'd only need 16 to have a statistically significant dataset. I think it's entirely possible that an expert could undervalue or overvalue one position relative to another, and I also believe that one expert can predict better than another. It's just like poker: everybody has the same chance of winning a given hand, but some are going to win more than others.

I also believe that the experts (maybe unknowingly) base their predictions on how well they did the previous year. Thus, the results should not be consistent from year to year. But that's only my opinion.

Joel
 
Chase Stuart said:
Do you think some experts are much better at some positions than others? ... Unless you had 20 years of data, it'd be hard to find any meaningful results on this, IMO.
Why is it hard to imagine that an FBG can be better at one position than another? In fact, I think they are. I know I'm personally better at ranking RBs and QBs, so why not FBGs?
 
Joel said:
I think it's entirely possible that an expert could undervalue or overvalue one position relative to another, and I also believe that one expert can predict better than another. It's just like poker: everybody has the same chance of winning a given hand, but some are going to win more than others. ...
Exactly. I'm not trying to bash any FBG; they all have areas of expertise. It's just my opinion that some undoubtedly have strengths in different areas, so why not try to identify those strengths and utilize them?
 
Why is it hard to imagine that an FBG can be better at one position than another? In fact, I think they are. I know I'm personally better at ranking RBs and QBs, so why not FBGs?
What would be the reason that an FBG would be better at one position than another? I'm not sure I can think of a reasonable explanation.
 
Chase Stuart said:
What would be the reason that an FBG would be better at one position than another? I'm not sure I can think of a reasonable explanation.
Let's turn it around: why not? According to my analysis (which I still need to summarize), why is it that some experts had similar predictions for RB last year but were considerably different with DEF? Doesn't that imply that one did a better job at the position, or that the other was worse at it? There may be no reasonable explanation; it might just happen, which is what the original poster was postulating.
 
Chase Stuart said:
What would be the reason that an FBG would be better at one position than another? I'm not sure I can think of a reasonable explanation.
Who says there needs to be a reason? I'm good at my times tables but suck at division. I don't choose to be good at times tables and suck at division; that's just how it is. Some FBGs may just be poor at ranking one position or better at ranking another. That doesn't mean the FBGs do this on purpose or anything like that.
 
Chase Stuart said:
What would be the reason that an FBG would be better at one position than another? I'm not sure I can think of a reasonable explanation.
Accountability is a good thing. For a site that charges for its predictions, I'd think they'd be happy to produce the track records of their prognosticators. People who ask for this are not on some sort of witch hunt; I think they truly want it for research purposes.
 
Chase Stuart said:
I find it hard to imagine that Maurile Tremblay nails wide receivers but misses QBs, or that Jason Wood is great on RBs but can't rank tight ends. Unless you had 20 years of data, it'd be hard to find any meaningful results on this, IMO.
I don't. I have a 16-team league full of 20-year veterans, and we have learned that certain people evaluate fantasy talent at certain positions better than at other positions. And that's just the way it is.
 
Chase Stuart said:
What would be the reason that an FBG would be better at one position than another? I'm not sure I can think of a reasonable explanation.
I think it is very reasonable. I find myself a much better judge of WR talent than RB talent, and a better judge of QB than TE. I'm not talking a little bit; I'm talking a lot of difference in judging, for example, WR talent versus RB talent. Kinda pisses me off, but whatcha gonna do?
 
Allow me to be clear here: I'm fully in favor of all of our writers and rankers being accountable for the work they provide. When we charge a fee for our service, you should expect, and we hope to give you, extremely high-quality work. So when I say I find it unreasonable that a staffer would be good at ranking QBs but not RBs, I'm not implying that we shouldn't judge how the staffers rank players. I'm simply saying I find it unlikely that there's a legitimate explanation.

In 2005, Thomas Jones averaged 16.1 FP/G against teams whose location starts with a letter between N and Z, and 6.6 FP/G against A through M teams. Also in 2005, Steve Smith was 10 FP/G better against teams whose names ended with a vowel than against teams whose names ended with a consonant. And it's not because teams whose names ended with a consonant had much better defenses. Sometimes, splits happen.

Force-fitting an explanation because Staffer X ranked 1st in QBs and 16th in RBs is natural. It's simple to say, going forward, that you'll follow Staffer X's predictions on QBs and not his predictions on RBs. But I think that's pretty silly unless you have years and years of rankings, because if Staffer X ranks 1st in RBs this year (which I find just as likely as him finishing first at any other position), you're going to have really missed out. When I say I find it unlikely that a staffer is better at ranking one position than another, I mean better going forward. Of course, we'll be better at some positions in retrospect. Thomas Jones was better against N through Z teams, but I wouldn't say Jones is better against N through Z teams, and I certainly wouldn't raise my expectations because of that when he plays the Patriots.

Without an explanation as to why we might think particular staffers are better at ranking one position than another, I'm going to chalk it up to the normal random variation you see in small sample sizes. (Note: this entirely omits the discussion of whether a particular system to evaluate a staffer's rankings is a good one; that's a pretty tough threshold to meet in and of itself.)
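A quick way to see the small-sample point in action is a simulation. The sketch below is purely illustrative (the noise level, number of experts, and player pool are all invented; this is not FBG's data or method): it gives 15 equally skilled "experts" identical accuracy and shows how far apart their single-season results land anyway.

import random

random.seed(1)

NUM_EXPERTS = 15
NUM_PLAYERS = 30   # say, the top 30 RBs

def one_season_error():
    # Each expert's rank for a player is the "true" rank plus the same
    # random noise; i.e., every expert is exactly as skilled.
    total = 0.0
    for true_rank in range(1, NUM_PLAYERS + 1):
        predicted = true_rank + random.gauss(0, 6)
        total += abs(predicted - true_rank)
    return total / NUM_PLAYERS

errors = sorted(one_season_error() for _ in range(NUM_EXPERTS))
print(f"best expert:  {errors[0]:.2f} average rank error")
print(f"worst expert: {errors[-1]:.2f} average rank error")
# The gap between "best" and "worst" here is pure luck: the inputs were identical.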

 
I'm simply saying I find it unlikely that there's a legitimate explanation.

Without an explanation as to why we might think particular staffers are better at ranking one position than another, I'm going to chalk it up to the normal random variation you see in small sample sizes. (Note: this entirely omits the discussion of whether a particular system to evaluate a staffer's rankings is a good one; that's a pretty tough threshold to meet in and of itself.)
I agree 100%. My analysis has suggested that, overall, the predictions of the experts are statistically similar, and that the differences that do exist are minor and probably due to chance.

However, there are some differences, and I think this is what makes the PD so powerful. Instead of relying on just the projections of Dodds, the weight can be spread across the experts. Therefore, if someone misses badly on a prediction, the impact is not as significant.

I also think that there is "group think" here. While the experts all claim to be independent thinkers, the fact is that all of the projections are fairly similar, with some outliers. This may be because they all start out from the same baseline (2005 EOY), or because they all read the same articles and boards and there are some built-in biases to follow the leader. Best example: Willie Parker. Last year was his first full year as a starter, yet all the experts under-projected him by 72 ± 5 points. That's too close to be random variation.

Time will tell if one expert is consistently better than another at certain positions. But right now we only have one year of data and, as I've maintained, past performance is no guarantee of future projections.

Joel
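To illustrate Joel's point about spreading the weight across experts, here is a minimal sketch. The numbers are invented, not FBG projections, and this is not the actual Projection Dominator logic; it just shows how an equal-weight average dampens one expert's big miss.

import statistics

# Hypothetical projected fantasy points for one player from five experts;
# the last expert misses badly.
projections = [210, 205, 198, 215, 120]
actual = 204

solo_error = abs(projections[-1] - actual)                    # relying on one expert
ensemble_error = abs(statistics.mean(projections) - actual)   # spreading the weight

print(f"worst single-expert error: {solo_error}")            # 84
print(f"equal-weight ensemble error: {ensemble_error:.1f}")  # 14.4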

 
Chase Stuart said:
Allow me to be clear here: I'm fully in favor of all of our writers and rankers being accountable for the work they provide. ... Without an explanation as to why we might think particular staffers are better at ranking one position than another, I'm going to chalk it up to the normal random variation you see in small sample sizes.
It depends on how many years back the data goes, but even with three years some conclusions can be drawn. If you consistently have the best rating, or close to it, at say QB, why should people not weight your rankings a bit more than, say, Tremblay's, if he ranked consistently worst or close to worst (just an example, as I have no idea; that's my point)? Why is it such a foreign thought that an FBG can be more in tune with how to rank RBs than, say, WRs? Everybody has strengths and weaknesses, so shouldn't the readers be aware of what the strengths and weaknesses of the FBGs are?

Just for the record, I'm not focused on any one FBG, or FBGs in general. I just want to get the best rankings I can and excel in my fantasy league. This isn't a witch hunt, nor should it be.

 
Joel said:
Time will tell if one expert is consistently better than another at certain positions. But right now we only have one year of data and, as I've maintained, past performance is no guarantee of future projections. ...
There is only one year of data for projections; I'm more concerned about rankings.
 
It depends on how many years back the data goes, but even with three years some conclusions can be drawn. ... Everybody has strengths and weaknesses, so shouldn't the readers be aware of what the strengths and weaknesses of the FBGs are?
Chase is right that some things can be chalked up to random variation. If Expert A was tops in RB evaluation last year and Expert B was tops in QB evaluation just last year, that could easily be a total fluke. But if you look back at the last five years, and A is always around the top for RBs and B is always around the top for QBs, that would seem to be a sufficiently large statistical sample to give some serious weight to the idea that maybe A is better at evaluating RBs (and B at QBs).

I think all the FBG experts are very knowledgeable, and I'm not trying to pick on anybody. I'm just not sure what the harm is in making this data available for analysis and discussion.

 
Pretend for a minute that someone had developed a model that worked really well for a given position but hadn't figured out something similar for other positions. Why wouldn't their projections for that position be better than for the others?

If Lewin had kept his QB projection model to himself, don't you think that he and the FOs would have developed better projections for QBs (peak DPAR adjusted for age) than for other positions?

FWIW, I think I'm onto something for WRs and RBs, but it would only work (if it works at all) for dynasty leagues.

 
Chase Stuart said:
Allow me to be clear here: I'm fully in favor of all of our writers and rankers being accountable for the work they provide. ... Without an explanation as to why we might think particular staffers are better at ranking one position than another, I'm going to chalk it up to the normal random variation you see in small sample sizes.
Clayton Gray writes many of your QB articles. To do so, I would think he spends more time evaluating QBs than other positions. Cecil Lammey writes the offensive line articles. I assume he watches tape of lines to make his assessments; do other writers? So he may notice changes in the quality of an offensive line that translate into changes in his projections of running backs that he isn't conscious of. Someone else may have been a tight end in high school and so focuses on tight ends during games. Of course some experts will be better at predicting one position than another. They are akin to stock analysts: to be good you have to have a great understanding of the general market, but there are some who are just exceptional when it comes to their specialization.

The fact that every study I have seen on the subject suggests that most projections are close to randomly correct (due to the vagaries that change during a season, injuries, etc.) may make this an exercise in futility anyway, but I do think that as paying members we ought to have the data.
 
Chase Stuart said:
What would be the reason that an FBG would be better at one position than another? I'm not sure I can think of a reasonable explanation.
Chase, I have a great deal of respect for the FF views of you and all of the other FBG experts; otherwise, I wouldn't be an FBG subscriber, and a happy subscriber at that! Having said that, I would be extremely surprised if any FBG expert was equally good at evaluating all positions. Statistically, the likelihood that you (or any other FBG expert; there's nothing about my comment that is addressed particularly to you) are exactly as good at predicting QB performance as you are at predicting RB performance, or TE performance, etc., would be extremely small, I would think. We all have our relative strengths. But if there are really no differences, either among positions for a single FBG expert or among FBG experts for a single position, why does Mike Herman do the weekly kicker reports? I'm assuming that while David & Joe and you and all the other FBG experts may be quite good at analyzing the kicker position, Mike Herman is considered to have particularly good insights/expertise in that area.

I guess that with all the article archives available for previous years, I'm at a loss to understand the rationale for not making the individual FBG expert rankings for previous years available to subscribers as well.

 
Chase, I have a great deal of respect for the FF views of you and all of the other FBG experts... Having said that, I would be extremely surprised if any FBG expert was equally good at evaluating all positions. ... I'm at a loss to understand the rationale for not making the individual FBG expert rankings for previous years available to subscribers as well.
Herman owns kickers. That one's different. But we all know that, because Herman spends entirely too much on them.

I disagree that the likelihood of an expert being equally good at predicting QB, RB and WR performance is small. I think the likelihood of an expert's results being equally good is very small, but that's the output and not the input.

This article is six years old, but still relevant. The exact same Edgerrin James can vary between 1838 and 2749 total yards in a season based entirely on luck. Luck's a pretty powerful force, especially with projections. I think most of the experts here are pretty good at doing rankings. But until someone comes up with a theory as to why Jason Wood's QB projections are great and his RB projections are poor, I personally won't believe it. Just seeing that his year-end results vary wildly would not be enough for me.

The only reason I'm harping on this is because it's not just meaningless; it's potentially counterproductive. I'd hate for one of our subscribers to ignore someone's rankings for a bad reason and miss out on a steal.
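The luck point lends itself to a quick Monte Carlo. The sketch below uses made-up parameters (it is not the model from the article referenced above): hold a back's per-game "talent" fixed and let game-to-game randomness drive the season total.

import random

random.seed(7)

def simulate_season(mean_yards=135, sd=45, games=16):
    # Season total for a fixed talent level with noisy game outcomes.
    return sum(max(0.0, random.gauss(mean_yards, sd)) for _ in range(games))

totals = sorted(simulate_season() for _ in range(1000))
print(f"5th percentile season:  {totals[50]:.0f} total yards")
print(f"95th percentile season: {totals[949]:.0f} total yards")
# Identical "talent" in, yet the season totals span several hundred yards.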

 
Clayton Gray writes many of your QB articles. To do so, I would think he spends more time evaluating QBs than other positions. ... Of course some experts will be better at predicting one position than another. ... I do think that as paying members we ought to have the data.
I think the more effort you put into a position, the better you'll usually be. If you believe that an expert spends a lot more time on a certain position than another, then you should boost how much you weight his rankings/projections. But you don't need to see the year-end results to do that, right?
 
This is one of the problems with FF. Try searching the internet for preseason fantasy rankings for any year prior to '07 and you will find maybe two links. It would be real nice to see who is hitting and who is missing, who has success or failure at certain positions, etc. Instead, all that data is deleted and all the fantasy "experts" come into the season unblemished.

 
Time will tell if one expert is consistently better than another at certain positions. But right now we only have one year of data and, as I've maintained, past performance is no guarantee of future projections.
This brings up another important point. I don't think time will necessarily tell at all, because many of us learn a lot each year. I think I get better at making projections each year, and I might change the methods I use to go about doing them. It's not much different than when a WR changes teams: his past results become a lot less relevant.
 
Chase,

I find it odd that FBG is averse to grading their previous year's predictions. Why not?

It would be useful for subscribers to see who did well in their predictions last year and who didn't, by player, by position, overall.

Projections are broken out by Dodds, Smith, Henry, Wood, and Tremblay.

Rankings are broken out by Pasquino, David and Joe, Smith, Borbely, Norton, Tremblay, Wimer, Tefertiller, Haseley, Henry, Gray, Bloom, Wood, and Hicks.

It would be useful to see who, on average, was actually the best predictor of real performance over the past several years. Yes, it will vary year to year, but Wall Street analysts put their predictions out there and are graded by the market on their results. Why not FBG? I also find it hard to believe you have deleted your previous years' predictions. Backup? Archive? Is disk space that costly?

It would be trivial for you to assign a score per player prediction (say, the difference between predicted ranking and actual ranking) and then total it up in aggregate, by position, by NFL team, etc.

Personally, I'd look at past performance as one indicator of skill in predicting future results, and if the results were to show that one staffer was better than the others, I'd sort by that staffer instead of the default.
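For concreteness, the "trivial" scorer described above might look like the sketch below. The experts, players, ranks, and finishes are hypothetical placeholders, not real FBG data: sum the absolute rank error per expert, then aggregate by position.

from collections import defaultdict

# (expert, position, player) -> predicted rank
predictions = {
    ("Wood",  "QB", "McNabb"):  1,
    ("Wood",  "QB", "Manning"): 2,
    ("Dodds", "QB", "McNabb"):  3,
    ("Dodds", "QB", "Manning"): 1,
}
# (position, player) -> actual end-of-season rank
actual_rank = {("QB", "McNabb"): 15, ("QB", "Manning"): 1}

scores = defaultdict(int)
for (expert, pos, player), predicted in predictions.items():
    scores[(expert, pos)] += abs(predicted - actual_rank[(pos, player)])

for (expert, pos), total in sorted(scores.items()):
    print(f"{expert} {pos}: total rank error {total}")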

 
I find it odd that FBG is averse to grading their previous year's predictions. Why not?
If you really wanted an answer, it would be rather obvious: money. From a business perspective it may prove to be very foolish, and since no one else is doing it either, why take the unneeded risk?
 
If you really wanted an answer, it would be rather obvious: money. ...
They could always spin it as a positive: "We're the only fantasy football website that not only provides detailed analysis and predictions, but also grades ourselves on our predictions." - Footballguys.com

I personally don't believe the FBG staff is so afraid of the results that they are purposely not publishing them.

 
The problem would be the nitpicking and dissecting of every little misstep by others. And they would.

The current business model is working well without opening themselves up to criticism. Risk vs. reward.

 
Chase,

I find it odd that FBG is averse to grading their previous year's predictions. Why not?
I don't think FBG is averse to grading their previous year's predictions. Not at all.
I also find it hard to believe you have deleted your previous year's predictions. Backup? Archive? Is disk space that costly?
Disk space isn't very costly, and I haven't deleted my previous year's predictions. (Note: I don't do rankings or projections.)
It would be trivial for you to assign a score per player prediction (say, the difference between predicted ranking and actual ranking) and then total it up in aggregate, by position, by NFL team, etc.
This is a very poor way of comparing/analyzing projections. Many people have come up with ways to compare projections, but I've yet to feel really comfortable with any of them. To explain why yours isn't very effective:

1. The difference between a QB finishing as QB3 versus QB8, and as QB18 versus QB23, is not the same. Your method would treat the two misses identically.

2. If you had ranked McNabb as your QB1 last year, I think that would have been a better ranking than putting Manning as your QB1. McNabb plus the other Philly QBs outscored Manning plus the other Colts QBs, and that's more important. Your recognition that the Philly QBs would produce a lot of points should be rewarded, not penalized.

3. If I had ranked Steve Smith as my WR1, that would have been a better prediction, IMO, than ranking Marvin Harrison as my WR1. If poster A thinks that Steve Smith, when healthy and with Jake Delhomme, is worse than Marvin Harrison, and poster B thinks the opposite, that's a legitimate debate. And it turns out that when Smith and Delhomme were healthy, Smith was significantly better than Harrison. But poster A would win when looking at the rankings.

Do you see how all of these examples show that analyzing past predictions using your method would lead to inappropriate results?
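One possible patch for objection 1, sketched with invented QB point totals: score a miss in fantasy points rather than rank slots, so mixing up QB3 and QB8 costs more than mixing up QB18 and QB23. (This does nothing for objections 2 and 3.)

# Hypothetical end-of-season fantasy point totals by QB rank.
actual_points = {1: 380, 3: 330, 8: 290, 18: 220, 23: 210}

def rank_error(predicted, actual):
    return abs(predicted - actual)   # treats both misses identically

def points_error(predicted, actual):
    return abs(actual_points[predicted] - actual_points[actual])

print(rank_error(3, 8), rank_error(18, 23))       # 5 and 5: same penalty
print(points_error(3, 8), points_error(18, 23))   # 40 vs. 10: the top miss costs more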

 
Chase Stuart said:
Luck's a pretty powerful force, especially with projections. I think most of the experts here are pretty good at doing rankings. But until someone comes up with a theory as to why Jason Wood's QB projections are great and his RB projections are poor, I personally won't believe it. ... I'd hate for one of our subscribers to ignore someone's rankings for a bad reason and miss out on a steal.

This is the most important point: it's hard enough to do adequate rankings, but predictions are a lot of luck. Very few people can accurately predict who will be in the playoffs from year to year, and that is making projections on a global scale. Footballguys (and most other sites) are famous for compressing their projections to the mean. It doesn't matter whether LT scores 25 or 35 touchdowns if you selected him first and he finished with the most fantasy points, yet the error would be 50%. But projections are necessary for people who use VBD for a snake or a dynamic auction draft, and small changes make significant changes in where players are ranked. Still, the data would be interesting, and what hasn't been provided is an argument against FBG posting it. I definitely have DD from 2006, so I should have David's final projections (since I download the week before WCOFF), and I may have 2005 as well; if someone really wants them, they can PM me here.

Chase, are you speaking here for yourself, or is this something that the FBG brass has discussed and decided against doing? If it was the FBG brass, can you elucidate the reasons they have against it?

 
Copy, paste, and print out the rankings at the beginning of the season.

Other than a few guys who are sure things, if you can predict FF you can predict pork belly futures as well.

I just like the info.

 
Chase - are you speaking here for yourself or is this something that the FBG brass has discussed and decided against doing? If it was the FBG brass, can you elucidate the reasons they have against it?
I'm speaking for myself. Once again, I think misinformation is way worse than no information. Until someone comes up with a good way to compare and analyze past predictions, I would treat any analysis as pretty meaningless. I don't think it's a coincidence that no one in any of these threads ever comes up with a good way to measure predictions.
 
Chase Stuart said:
I don't think FBG is averse to grading their previous year's predictions. Not at all.

Prove it.

Chase Stuart said:
Disk space isn't very costly, and I haven't deleted my previous year's predictions. (Note: I don't do rankings or projections.)

Please publish last year's predictions so we can do the work, even if FBG is not going to do it.

Chase Stuart said:
This is a very poor way of comparing/analyzing projections. Many people have come up with ways to compare projections, but I've yet to feel really comfortable with any of them. ...

Could you please provide a system you believe is fair?
You guys are thinking about fantasy football and statistics all year long; I'm sure you can come up with an appropriate means of scoring/measurement.

If you predict Donovan McNabb to be #1 for a 17-week schedule, then stick by it. It's football; players get hurt, some more than others, as evidenced by McNabb, "Fragile Fred", Chris Perry, etc. Manning doesn't miss games; McNabb does.

Everyone on your staff who provides predictions is likely very knowledgeable about the situation of most players (at least the top 100) and will have roughly the same information (efficient market theory). If they don't have good information, then I'd submit they should not be providing predictions on your site.

The resistance to publishing results with a well-defined scoring system seems odd, and against the general nature of the website of fostering an open discussion to arrive at the best results.

 
Chase Stuart said:
I'm speaking for myself. Once again, I think misinformation is way worse than no information. Until someone comes up with a good way to compare and analyze past predictions, I would treat any analysis as pretty meaningless. ...
Although I think the raw data should be available (and really would be if we cared enough to get people to look through their files), I agree with this; there really isn't a fair way of evaluating it. I am sure Doug Drinen could come up with the most appropriate way to test this, but I bet he might argue it is statistically meaningless. It might be as simple as the percentage of the top 5/10 correct, but even that would probably spread out in a bell curve, and a few people would hit it right two years in a row just by luck. It might be more useful for us to know who was at the tail end of the curve a couple of years in a row. But, as we have all heard ad nauseam, past performance is not a predictor of future results. Even Warren Buffett has down years.
 
The resistance to publishing results with a well-defined scoring system seems odd, and against the general nature of the website of fostering an open discussion to arrive at the best results. ...
I'm sorry we're not seeing eye to eye here. I guess I'm not doing a good job of explaining myself. Best of luck,

Chase

 
Chase Stuart said:
Until someone comes up with a good way to compare and analyze past predictions, I would treat any analysis as pretty meaningless. ...
It's quite possible to compute the difference between predicted ranking and actual, on both an individual-player level and a position basis. If you are off with McNabb because he got hurt, it would average out on the position with others doing well. The entire point is to see how good a particular person is, year over year, at predicting on a per-position, per-team, per-player basis. If the numbers showed that Wood was very good at predicting McNabb year over year, I'm betting that most of us would be on the edge of our seats waiting for his prediction for 2007. Ditto for Cecil Lammey for Denver Broncos players. Ditto for Herman for PKs.

Why not publish the data for the 2006 predictions and the 2006 results using the same scoring system, even if you don't believe you have a good system?
 
Although I think the raw data should be available, I agree with this; there really isn't a fair way of evaluating it. I am sure Doug Drinen could come up with the most appropriate way to test this, but I bet he might argue it is statistically meaningless. ...
I don't think Doug could come up with an appropriate way to test this, because he and I have discussed this several times before. He's come up with some systems that we both think are flawed, and neither of us has come up with a system that we both think is accurate. I'm of the opinion that there isn't an appropriate way to measure rankings, especially when they're as similar as expert fantasy rankings. There would be enough of a gap to tell you that Jason Wood's projections are better than an alphabetical listing, but I don't believe there would be enough of a gap to tell you that his projections are better than Chris Smith's. That's why you should focus on the reasoning behind their projections/rankings, and not on how those rankings/projections hold up in retrospect.

I don't think a percentage of the top 5/10 correct would be an appropriate measure. I also agree that the luck factor, in addition to other issues, makes this a complex problem.
 
Chase, sorry we're not communicating well.

Would it be possible to post the data for last year's predictions, by position and by staffer, as well as the actual results using the same scoring mechanism, on the website?

Best of luck

 
Would it be possible to post the data for last year's predictions, by position and by staffer, as well as the actual results using the same scoring mechanism, on the website?
The actual results are readily available on FBG. I'm on my other computer and can't seem to figure out my subscriber log-in information, but they aren't hard to find. I believe someone posted the data for last year's predictions in the forums not too long ago.

Out of curiosity, assuming you have this large mound of data, what would you then do with it?
 
When I clicked on the link, I got this year's rankings rather than the 2007 ones. I would see if there were any trends with the overall group: does the consensus get x out of 20 of the top WRs, RBs, etc., and where were the others that broke into the top 20 ranked? Outside of injury fill-ins, what circumstances seemed to lead them to break through, etc.

 
Problems with analyzing past rankings

If anyone could come up with a scoring system that accurately rewarded the people whose projections were the best, I'd lead the charge on analyzing everyone's past projections/rankings (although I think a lot less interesting information would be gleaned from it than most expect). Here are some problems I see:

1. Staffer A thinks Adrian Peterson is awesome and Chester Taylor is terrible. He ranks Peterson #8 and Taylor #54. Staffer B thinks a bit differently, ranking Peterson at #30 and Taylor at #18. Staffer A and Staffer B both agree that Taylor has a 20% chance of getting injured, which is factored into their rankings. Taylor blows his ACL on the first play of the season, and Peterson finishes as the 4th best RB. Whose projection was better? It seems to me that the question goes unresolved. Yet any scoring system would reward Staffer A significantly and punish Staffer B heavily.

2. Staffer A thinks Willie Parker has a 10% chance of getting injured. Staffer B thinks Parker has a 20% chance of getting injured. Both staffers agree on Parker's production when healthy, so Staffer B has Parker ranked #10 and Staffer A has Parker ranked #6. Consider (a short code sketch of this case follows the list):

A) Parker does not get hurt, but plays poorly and ranks 20th. Staffer B seems to be unfairly rewarded.

B) Parker does not get hurt, but actually had a 25% chance of getting injured. Staffer B does not receive any credit for his superior job of gauging Parker's injury risk, and gets penalized when Parker stays healthy and ranks 6th. Staffer A seems to win unjustifiably.

C) Parker does get hurt, but actually had just a 5% chance of getting injured. Now Staffer B, whose injury estimate was the worse one, is the staffer who gets rewarded.

3. Staffer A thinks the loss of Tarik Glenn is going to hurt Manning; Staffer B thinks it will not. Staffer A ranks Manning 3rd, Staffer B ranks Manning 1st. The loss of Glenn doesn't hurt Manning, but Marvin Harrison gets injured and Manning only finishes third. Staffer A seems to be unfairly credited with a victory.

4. Staffer A thinks Travis Henry stinks; Staffer B thinks anyone in Denver will do well. Staffer A ranks Henry 20th, Staffer B ranks Henry 5th. Henry plays at an incredible level for 8 weeks, then gets injured for the season and ranks 30th. His replacement plays at an incredible level for the last 8 weeks. Staffer B was right on two counts and Staffer A was wrong on his only one, yet Staffer A "wins".

5. Staffer A and Staffer B both agree that McNabb will average 24 FP/G, and they both agree that McNabb will play 10 games. Staffer A ranks McNabb as if he projected him to score 240 points, and has him ranked 15th. Staffer B decides to add 15 FP/G for the remaining 6 games, because that's what a replacement-level QB will score. Staffer B ranks McNabb as if he projected him to score 330 FPs, ranking him 3rd. McNabb averages 24 FP/G, gets injured after 10 games, and ends the season ranked 15th. Both staffers perfectly nailed what would happen, yet Staffer B is not rewarded at all, despite actually having the more useful ranking for drafters.

6. Staffer B thinks that Marshawn Lynch will be a stud down the stretch and lead teams to fantasy glory. He ranks him 15th, thinking he'll be 30th for most of the season but an RB1 when it counts, and that he'll probably finish around 25th overall. Staffer A thinks Lynch will be the same all year, and also ranks him 25th. Staffer B's prediction comes perfectly to life, and many teams with Lynch win their championship. But Lynch ranks 25th, and Staffer A wins despite being less accurate with his guess as to what would happen than Staffer B.
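Problem 2 reduces to simple arithmetic, sketched here with invented probabilities and point totals: both staffers are internally consistent in expectation, yet whichever injury scenario happens to occur decides who "wins" a retrospective grading.

HEALTHY_POINTS = 320   # agreed-upon season total if Parker stays healthy
INJURED_POINTS = 120   # agreed-upon total if he goes down

def expected_points(injury_prob):
    # Probability-weighted season total for a given injury estimate.
    return (1 - injury_prob) * HEALTHY_POINTS + injury_prob * INJURED_POINTS

staffer_a = expected_points(0.10)   # ranks Parker higher
staffer_b = expected_points(0.20)   # ranks Parker lower
print(f"Staffer A expectation: {staffer_a:.0f}")   # 300
print(f"Staffer B expectation: {staffer_b:.0f}")   # 280
# If Parker stays healthy and scores ~320, Staffer A "wins" the grade,
# even in a world where Staffer B's injury estimate was the better one.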

 
Clearly there are a number of issues with the validity of comparing pre-season predictions to year-end results; no argument there. Chase, are you saying (without actually saying it) that FBG will not make the previous years' predictions and year-end results available in an easy-to-download spreadsheet? Just askin'.

Thanks

Bink

 
