All prior years are deleted. I was wondering if there was a link to the FBG experts' rankings prior to each season for the last two years or so? I'm not trying to crucify somebody's rankings, but I would like to see who appears to be right at QB, RB, WR, etc. on average more than the rest. Thanks
Do you think some experts are much better at some positions than others? I find it hard to imagine that Maurile Tremblay nails wide receivers but misses QBs, or that Jason Wood is great on RBs but can't rank tight ends. Unless you had 20 years of data, it'd be hard to find any meaningful results on this, IMO.
I think you'd only need 16 to have a statistically significant dataset.
Why is it hard to imagine that an FBG can be better at one position than another? In fact I think they are. I know personally I'm better at ranking RBs and QBs, so why not FBGs?
Exactly. I'm not trying to bash any FBG, as they all have areas of expertise. It's just my opinion that some undoubtedly have strengths in different areas, so why not try to identify those and utilize them?
I think it's entirely possible that an expert could undervalue or overvalue one position relative to another. I also believe that one expert can predict better than another. It's just like poker - everybody has the same chance of winning a given hand, but some are going to win more than others.
I also believe that the experts (maybe unknowingly) base their predictions on how well they did the previous year. Thus, the results should not be consistent from year to year...
But that's only my opinion.
Joel
What would be the reason that an FBG would be better at one position than another? I'm not sure I can think of a reasonable explanation.
Let's turn it around - why not? According to my analysis (which I still need to summarize), why is it that some experts had similar predictions for RB last year but were considerably different with DEF? Doesn't that imply that one did a better job at the position, or that the other one was worse at it? There may be no reasonable explanation - it might just happen - which is what the original poster was postulating.
Who says there needs to be a reason? I'm good at my times tables but suck at division. I don't choose to be good at times tables and suck at division; that's just how it is. Some FBGs may just be poor at ranking one position or better at ranking another. That doesn't mean the FBGs do this on purpose or anything like that.
Accountability is a good thing. For a site that charges for its predictions, I'd think they'd be happy to produce the track records of their prognosticators. People who ask for this are not on some sort of witch hunt; I think they truly want it for research purposes.
I don't. I have a 16-team league full of 20-year veterans... and we have learned that certain people evaluate fantasy talent at certain positions better than other positions. And that's just the way it is.
I think it is very reasonable. I find myself a much better judge of WR talent than RB talent, and a better judge of QBs than TEs. I'm not talking a little bit; I'm talking a lot of difference in judging, for example, WR talent over RB talent. Kinda pisses me off, but whatcha gonna do?
I agree 100%. My analysis has suggested that, overall, the predictions of the experts are statistically similar and that the differences that do exist are minor and probably due to chance. However, there are some differences, and I think this is what makes the PD so powerful. Instead of relying on just the projections of Dodds, the weight can be spread across the experts. Therefore, if someone misses badly on a prediction, the impact is not as significant.
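The "spread the weight across the experts" point can be illustrated with a quick sketch (all numbers invented; this is not FBG's actual method or data): a consensus average caps the damage from any single expert's bad miss.

```python
# Invented season projections for one player from three experts;
# Expert C misses badly. Averaging limits the damage from that miss.
projections = {"Expert A": 280, "Expert B": 260, "Expert C": 150}
actual_points = 265

consensus = sum(projections.values()) / len(projections)          # 230.0
worst_single_error = max(abs(p - actual_points) for p in projections.values())
consensus_error = abs(consensus - actual_points)

print(f"worst single-expert error: {worst_single_error}")   # 115
print(f"consensus error: {consensus_error:.0f}")            # 35
```

The consensus isn't guaranteed to beat every individual expert, but it can never be as wrong as the worst one.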
It depends on how many years back the data goes, but even with three years some conclusions can be drawn. If you consistently have the highest, or close to the best, rating at, say, QB, why should people not weigh your rankings a bit more than, say, Tremblay's, if he ranked consistently worst, or close to worst (just an example, as I have no idea - that's my point)? Why is it such a foreign thought that an FBG can be more in tune with how to rank RBs than, say, WRs? Everybody has strengths and weaknesses, so shouldn't the readers be aware of what the strengths and weaknesses of the FBGs are?
Allow me to be clear here: I'm fully in favor of all of our writers and rankers being accountable for the work they provide. When we charge a fee for our service, you should expect, and we hope to give you, extremely high quality work. So when I say I find it unreasonable that a staffer would be good at ranking QBs but not RBs, I'm not implying that we shouldn't judge how the staffers rank players. I'm simply saying I find it unlikely that there's a legitimate explanation.
In 2005, Thomas Jones averaged 16.1 FP/G against teams whose location starts with a letter between N and Z, and 6.6 FP/G against A through M teams. Also in 2005, Steve Smith was 10 FP/G better against teams whose last names ended with a vowel than teams whose last names ended with a consonant. And it's not because teams whose names ended with a consonant had much better defenses. Sometimes, splits happen.
Force fitting an explanation because Staffer X ranked 1st in QBs and 16th in RBs is natural. It's simple to say, going forward, that you'll follow Staffer X's predictions on QBs and not his predictions on RBs. But I think that's pretty silly, unless you have years and years of rankings. Because if Staffer X ranks 1st in RBs this year (which I find just as likely as him finishing first at any other position), you're going to have really missed out. When I say I find it unlikely that a staffer is better at ranking one position than another, I mean better going forward. Of course, we'll be better at some positions in retrospect. Thomas Jones was better against N through Z teams, but I wouldn't say Jones is better against N through Z teams, and I certainly wouldn't raise my expectations because of that when he plays the Patriots.
Without an explanation as to why we might think particular staffers are better at ranking one position than another, I'm going to chalk it up to the normal random variation you see in small sample sizes. (Note: this entirely omits the discussion of whether a particular system to evaluate a staffer's rankings is a good one; that's a pretty tough threshold to meet in and of itself.)
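The small-sample point can be demonstrated with a quick simulation (a hypothetical sketch, not based on any real rankings): even if every staffer is exactly equally skilled, several of them will look like dramatic position specialists in any given season purely by chance.

```python
import random

random.seed(42)

N_STAFFERS = 16
POSITIONS = ["QB", "RB", "WR", "TE"]

def simulate_season():
    """Assign every staffer a random accuracy rank (1 = best) at each
    position -- i.e., nobody has any true positional skill edge."""
    results = {s: {} for s in range(N_STAFFERS)}
    for pos in POSITIONS:
        order = list(range(1, N_STAFFERS + 1))
        random.shuffle(order)
        for staffer, rank in zip(range(N_STAFFERS), order):
            results[staffer][pos] = rank
    return results

def apparent_specialists(results, top=3, bottom=3):
    """Count staffers who look like specialists: near the top at one
    position while near the bottom at another."""
    return sum(
        1
        for ranks in results.values()
        if min(ranks.values()) <= top and max(ranks.values()) > N_STAFFERS - bottom
    )

seasons = [apparent_specialists(simulate_season()) for _ in range(1000)]
avg = sum(seasons) / len(seasons)
print(f"Average apparent 'specialists' per season, out of {N_STAFFERS}: {avg:.1f}")
```

With 16 staffers and four positions, the expected count works out to roughly four or five "specialists" per season, all of it noise.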
There is only one year of data for projections; I'm more concerned about rankings.
I also think that there is some groupthink here. While the experts all claim to be independent thinkers, the fact is that all of the projections are fairly similar, with some outliers. This may be because they all start out from the same baseline (2005 EOY), or because they all read the same articles and boards and there are some built-in biases to follow the leader. Best example: Willie Parker. Last year was his first full year as a starter, yet all the experts underprojected him by 72 ± 5 points. That's too close to be random variation.
Time will tell if one expert is consistently better than another at certain positions. But right now we only have one year of data, and as I've maintained, past performance is no guarantee of future projections.
Joel
Just for the record, I'm not focused on any one FBG or FBGs in general. I just want to get the best rankings I can and excel in my fantasy league. This isn't a witch hunt, nor should it be.
Clayton Gray writes many of your QB articles. To do so, I would think he spends more time evaluating QBs than other positions. Cecil Lammey writes the offensive line articles. I assume he watches tape of lines to make his assessments; do other writers? So he may notice changes in the quality of an offensive line that translate into changes in his projections of running backs that he isn't conscious of. Someone else may have been a tight end in high school and so focuses on tight ends during games. Of course some experts will be better at predicting one position than another. They could be considered akin to stock analysts: to be good you have to have a great understanding of the general market, but there are some who are just exceptional when it comes to their specialization. The fact that every study I have seen on the subject suggests that most projections are close to randomly correct (due to the vagaries that change during a season, injuries, etc.) may make this an exercise in futility anyway, but I do think as paying members we ought to have the data.
Chase, I have a great deal of respect for the FF views of you and all of the other FBG experts - otherwise, I wouldn't be an FBG subscriber, and a happy subscriber at that! Having said that, I would be extremely surprised if any FBG expert was equally good at evaluating all positions. Statistically, the likelihood that you (or any other FBG expert - there's nothing about my comment that is addressed particularly to you) are exactly as good at predicting QB performance as you are at predicting RB performance, or TE performance, etc., would be extremely small, I would think. We all have our relative strengths. But if there are really no differences either among positions for a single FBG expert, or among FBG experts for a single position, why does Mike Herman do the weekly kicker reports? I'm assuming that while David & Joe and you and all the other FBG experts may be quite good at analyzing the kicker position, Mike Herman is considered to have particularly good insights and expertise in that area. I guess that with all the article archives available for previous years, I'm at a loss to understand what the rationale is for not making the individual FBG expert rankings for previous years available to subscribers as well.
I had every year's rankings, projections and VBD apps saved on my desktop until it crashed.
Ouch! Totally aside from the FF stuff, that really sucks if you lost all your files and don't have backups.
Herman owns kickers. That one's different. But we all know that, because Herman spends entirely too much time on them.
I think the more effort you put into a position, the better you'll usually be. If you believe that an expert spends a lot more time on a certain position than another, then you should boost how much weight you give his rankings/projections. But you don't need to see the year-end results to do that, right?
This brings up another important point. I don't think time will necessarily tell at all, because many of us learn a lot each year. I think I get better at making projections each year, and I might change the methods I use to go about doing them. It's not much different than when a WR changes teams - his past results become a lot less relevant.
I find it odd that FBG is averse to grading their previous year's predictions. Why not?
If you really wanted an answer, it would be rather obvious: money. From a business perspective it may prove to be very foolish. And since no one else is doing it either, why take the unneeded risk?
They could always take the position as a positive: "We're the only fantasy football website that not only provides detailed analysis and predictions, but also grades ourselves on our predictions." - Footballguys.com
I don't think FBG is averse to grading their previous year's predictions. Not at all.
I also find it hard to believe you have deleted your previous year's predictions. Backup? Archive? Is disk space that costly?
Disk space isn't very costly, and I haven't deleted my previous year's predictions. (Note: I don't do rankings or projections.)
It would be trivial for you to assign a score per player prediction (say, the difference between predicted ranking and actual ranking) and then total it up in aggregate, by position, by NFL team, etc.
This is a very poor way of comparing/analyzing projections. Many people have come up with ways to compare projections, but I've yet to feel really comfortable with any of them.
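The proposal being debated here is concrete enough to sketch (the player names and ranks below are invented for illustration, not anyone's actual projections): score each prediction as the absolute gap between preseason rank and end-of-season rank, then sum per position.

```python
# Invented preseason ranks from one expert and the actual end-of-season
# positional ranks for the same players.
predicted = {
    ("QB", "Manning"): 1, ("QB", "McNabb"): 2, ("QB", "Palmer"): 3,
    ("RB", "Alexander"): 1, ("RB", "Johnson"): 2, ("RB", "Jones"): 3,
}
actual = {
    ("QB", "Manning"): 2, ("QB", "McNabb"): 9, ("QB", "Palmer"): 1,
    ("RB", "Alexander"): 4, ("RB", "Johnson"): 1, ("RB", "Jones"): 3,
}

def rank_error_by_position(predicted, actual):
    """Sum |predicted rank - actual rank| per position."""
    totals = {}
    for (pos, player), pred_rank in predicted.items():
        totals[pos] = totals.get(pos, 0) + abs(pred_rank - actual[(pos, player)])
    return totals

print(rank_error_by_position(predicted, actual))
# {'QB': 10, 'RB': 4}  ->  |1-2|+|2-9|+|3-1| = 10,  |1-4|+|2-1|+|3-3| = 4
```

It really is trivial to compute; the harder question, raised below, is whether a raw rank gap is the right unit of error at all.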
I'm speaking for myself. Once again, I think misinformation is way worse than no information. Until someone comes up with a good way to compare and analyze past predictions, I would treat any analysis as pretty meaningless. I don't think it's a coincidence that no one in any of these threads ever comes up with a good way to measure predictions.
Chase - are you speaking here for yourself or is this something that the FBG brass has discussed and decided against doing? If it was the FBG brass, can you elucidate the reasons they have against it?
Prove it.
I don't think FBG is averse to grading their previous year's predictions. Not at all.
Chase,
Please publish last year's predictions so we can do the work even if FBG is not going to do it.
I also find it hard to believe you have deleted your previous year's predictions. Backup? Archive? Is disk space that costly?
Could you please provide a system you believe is fair?
It would be trivial for you to assign a score per player prediction (say the difference between predicted ranking and actual ranking) and then total it up in aggregate, by position, by NFL team, etc.
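For what it's worth, the scheme being proposed here is easy to sketch. This is only a minimal illustration of the "sum of absolute rank differences" idea; the player names and finishing ranks below are made up, not real season data:

```python
# Proposed scoring scheme: for each player, score = |predicted rank - actual rank|,
# then total the scores per expert. Lower totals = rankings closer to reality.

def rank_error(predicted, actual):
    """Sum of absolute rank differences over players present in both rankings."""
    return sum(abs(predicted[p] - actual[p]) for p in predicted if p in actual)

# Hypothetical QB rankings (illustrative only, not real 2006 results)
expert_a = {"Manning": 1, "McNabb": 2, "Palmer": 3}
expert_b = {"McNabb": 1, "Manning": 2, "Palmer": 3}
season   = {"Manning": 1, "Palmer": 2, "McNabb": 3}

print(rank_error(expert_a, season))  # 0 + 1 + 1 = 2
print(rank_error(expert_b, season))  # 2 + 1 + 1 = 4
```

The same totals could be subtotaled by position or by NFL team simply by filtering the player set before summing.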
You guys are thinking Fantasy Football and Statistics all year long. I'm sure you can come up with an appropriate means of scoring/measurement. If you predict Donovan McNabb to be #1 for a 17-week schedule, then stick by it. It's football; players get hurt, some more than others, as evidenced by McNabb, "Fragile Fred", Chris Perry, etc. Manning doesn't miss games, McNabb does.
To explain why yours isn't very effective:
1. The difference between a QB finishing as QB3 versus QB8 is not the same as the difference between QB18 and QB23, but your method would treat them the same way.
2. If you had ranked McNabb as your QB1 last year, I think that would have been a better ranking than putting Manning as your QB1. McNabb plus the other Philly QBs outscored Manning plus the other Colts QBs, and that's more important. Your recognition that the Philly QBs would produce a lot of points should be rewarded, not penalized.
3. If I had ranked Steve Smith as my WR1, that would have been a better prediction IMO than ranking Marvin Harrison as my WR1. If poster A thinks that Steve Smith when healthy and with Jake Delhomme is worse than Marvin Harrison, and poster B thinks the opposite, that's a legitimate debate. And it turns out, when Smith and Delhomme were healthy, Smith was significantly better than Harrison. But poster A would win when looking at the rankings.
Do you see why all of these examples show how analyzing past predictions using your method would lead to inappropriate results?
Although I think the raw data should be available (and really would be if we cared enough to get people to look through their files), I agree with this: there really isn't a fair way of evaluating it. I am sure Doug Drinen could come up with the most appropriate way to test this, but I bet he might argue it is statistically meaningless. It might be as simple as the percentage correct in the top 5/10, but even that would probably spread out in a bell curve, and a few people would hit it right two years in a row just by luck. It might be more useful for us to know who was at the tail end of the curve a couple of years in a row. But, as we have all heard ad nauseam, past performance is not a predictor of future results. Even Warren Buffett has down years.
I'm sorry we're not seeing eye to eye here. I guess I'm not doing a good job of explaining myself. Best of luck, Chase.
Everyone on your staff who provides predictions is likely very knowledgeable about the situation of most players (at least the top 100) and will have roughly the same information (efficient market theory). If they don't have good information, then I'd submit they should not be providing predictions on your site.
The resistance to publishing results with a well-defined scoring system seems odd, and against the general nature of the website of fostering an open discussion to arrive at the best results.
It's quite possible to compute the difference between predicted and actual ranking both at the individual player level and on a position basis. If you are off on McNabb because he got hurt, then it would average out across the position, with others doing well. The entire point is to see how good a particular person is, year over year, at predicting on a per-position, per-team, per-player basis. If the numbers showed that Wood was very good at predicting McNabb year over year, I'm betting that most of us would be on the edge of our seats waiting for his prediction for 2007. Ditto for Cecil Lammey on Denver Broncos players. Ditto for Herman on PK.
Why not publish the data for 2006 predictions and 2006 results using the same scoring system, even if you don't believe you have a good system?
I don't think Doug could come up with an appropriate way to test this, because he and I have discussed this several times before. He's come up with some systems that we both think are flawed, and neither of us has come up with a system that we both think is accurate. I'm of the opinion that there isn't an appropriate way to measure rankings, especially when they're as similar as expert fantasy rankings are. There would be enough of a gap to tell you that Jason Wood's projections are better than an alphabetical listing, but I don't believe there would be enough of a gap to tell you that his projections are better than Chris Smith's. That's why you should focus on the reasoning behind their projections/rankings, and not on how those rankings/projections hold up in retrospect. I don't think a percentage correct in the top 5/10 would be an appropriate measure. I also agree that the luck factor, in addition to other issues, makes this a complex problem.
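One family of measures that often comes up in discussions like this one (not necessarily one of the systems Chase and Doug considered) is rank correlation, e.g. Spearman's rho between predicted and actual finish. A minimal sketch, assuming no ties in either ranking:

```python
def spearman_rho(predicted, actual):
    """Spearman rank correlation between two rankings of the same players
    (assumes no ties). 1.0 = identical order, -1.0 = completely reversed."""
    players = [p for p in predicted if p in actual]
    n = len(players)
    d2 = sum((predicted[p] - actual[p]) ** 2 for p in players)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Illustrative only: a perfect prediction vs. a fully backwards one
perfect  = {"A": 1, "B": 2, "C": 3, "D": 4}
backward = {"A": 4, "B": 3, "C": 2, "D": 1}
print(spearman_rho(perfect, perfect))   # 1.0
print(spearman_rho(perfect, backward))  # -1.0
```

Note that this measure inherits the objection in point 1 earlier in the thread: it weights a miss at rank 3 vs. 8 exactly the same as a miss at rank 18 vs. 23, even though the fantasy-point stakes differ.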
I don't understand this.
If you are off with McNabb because he got hurt, then it would average out on the position with others doing well.
Agreed. Disregard this point.
The actual results are readily available on FBG. I'm on my other computer and can't seem to figure out my subscriber log-in information, but they aren't hard to find. I believe someone posted the data for last year's predictions in the forums not too long ago.
Chase, sorry we're not communicating well. Would it be possible to post the data for last year's predictions, by position and by staffer, as well as the actual results using the same scoring mechanism, on the website? Best of luck