Fantasy Football - Footballguys Forums


"An Indepth Look at Accuracy of Weekly Projections" (1 Viewer)

davearm said:
Driver said:
A lot of people have posted a very simple question. Are FBG projections any good? ....

The study will go from week 6 through week 16 of this fantasy season. We will only use sites that have projections (and we will settle on which sites will be tracked before the study starts). All projections will be converted to fantasy points using FBG scoring.

Based on actual fantasy points, we will record the top 20 QBs, top 35 RBs, top 50 WRs and top 15 TEs. Those are the scorecard players. Each site will have its projections for said players converted to fantasy points and compared.

For example: P. Manning is a top 20 QB in week 8 and throws for 260 yards, 2 passing TDs, 1 interception and has 2 rushing yards. FBG scoring calculates this as 260/20 + 4*2 - 1 + 2/10 = 13 + 8 - 1 + 0.2 = 20.2 fantasy points... If Site A projected 290 yards and 2 TDs, and Site B had him at 265 yards, 1.8 TDs, 0.6 INTs, and 3 rushing yards, then the comparison would look like this:

Site A translates to 290/20 + 2*4 = 14.5 + 8 = 22.5 FP, and Site B translates to 265/20 + 1.8*4 - 0.6*1 + 3/10 = 13.25 + 7.2 - 0.6 + 0.3 = 20.15 FP.

Site A = 1 - ( |20.2 - 22.5| / 20.2 ) = 88.6% accuracy for P. Manning

Site B = 1 - ( |20.2 - 20.15| / 20.2 ) = 99.8% accuracy for P. Manning

The plan is to then add these up for each position, compute an overall, etc. for each site by week.

I know this isn't an exact way to measure things. I am not sure it makes sense to add all 20 QBs and assign equal weights. I am not sure 20 QBs is the right number to assess, etc. I am posting this thread because I want it all out in the open on how best to do this.
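The scoring conversion and accuracy formula described above can be sketched in a few lines. This is only an illustration of the arithmetic in the Manning example; the scoring weights (yards/20, 4 per passing TD, -1 per INT, rushing yards/10) are taken from that example, not from a published FBG scoring spec.

```python
def fbg_points(pass_yds=0.0, pass_td=0.0, ints=0.0, rush_yds=0.0):
    """Convert a stat line (actual or projected) to fantasy points,
    using the weights from the worked example above."""
    return pass_yds / 20 + 4 * pass_td - ints + rush_yds / 10

def accuracy(actual_fp, projected_fp):
    """Proposed accuracy score: 1 - |actual - projected| / actual."""
    return 1 - abs(actual_fp - projected_fp) / actual_fp

# Manning's actual line: 260 pass yds, 2 TD, 1 INT, 2 rush yds
actual = fbg_points(260, 2, 1, 2)             # 20.2 FP
site_a = fbg_points(290, 2)                   # 22.5 FP
site_b = fbg_points(265, 1.8, 0.6, 3)         # 20.15 FP

print(round(accuracy(actual, site_a), 3))     # 0.886
print(round(accuracy(actual, site_b), 3))     # 0.998
```

Note that this metric can go negative when a projection misses by more than the actual total, which is one reason the denominator choice discussed later in the thread matters.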
I know we will definitely use projections and not rankings. The second you use rankings, every site has an excuse. We score our rankings differently, etc. And the point about the flex is right on. Most people have to make those decisions every week.

Some others might have misunderstood what I said when I said we will use the Top 20 QBs, etc. That list will be determined by actual fantasy performance. So if Kyle Orton busts out a huge game, his game counts. I think this gives us a good sampling of a lot of different players (instead of looking at who gets ranked as a top 20 QB which would likely be the same players almost every week). I also think it's important to total a site's projections and see how many yards, TDs, etc they were predicting. Even if they missed for a week, I would want to know that the site framed their numbers in reality (and didn't overstate TDs by 15%).
Excellent idea! Good luck with it. A couple comments:

1. Please include fantasyguru.com (John Hansen)

2. Regarding selecting the top-20 QBs based on actual fpts scored each week, I think you might be introducing some bias in determining which site has the "most accurate" projections. For example, what you are proposing is similar to evaluating the performance of 10 portfolios (containing 25 stocks each) recommended by an investment professional at the beginning of the time period. Then, looking at the performance of all stocks during a specified time period, and picking the top-20 (best-performing) stocks -- and measuring each portfolio's performance based on how many of the best-performing stocks they contained.

I haven't stated the problem too clearly. Basically the problem is that you haven't included the "poor or average performers." I think a better approach would be to include (in the group of QBs to be evaluated, for example): (1) the top-20 highest-scoring QBs each week, and (2) any QB ranked in the top-15 (projections) of any site being evaluated. I think this would eliminate some of the problematic bias, and provide a more stable group of players, at each position, for evaluation purposes.

Including only the top-20 scoring QBs each week would probably benefit FBG, compared to other sites. But the major problem is that you don't want to exclude Peyton Manning because he scored outside of the top-20 one week, and Drew Brees because he scored outside the top-20 the next week. Manning (in the first week) and Brees (in the second week) would have actual points much less than predicted points for virtually every site. But you don't want to exclude these data points -- IMO they are as valid as Matt Ryan scoring much higher than predicted (and being included in the group of the top-20 scoring QBs). You want to measure both positive and negative differentials between "predicted" and "actual." Maybe focus on average "absolute % difference" to measure the average difference between predicted and actual (either positive or negative). And I think you want to divide by predicted points (22.5 rather than 20.2) in the example above for Manning, Site A.

I think this is an extremely important project. But you want to do it right (and I may be off base in some of my suggestions). Good luck.
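The "average absolute % difference" measure suggested above, with predicted points as the denominator, is essentially a mean absolute percentage error. A minimal sketch, using the Manning/Site A numbers from the example (the helper names here are made up for illustration):

```python
def abs_pct_diff(predicted, actual):
    """|predicted - actual| / predicted, dividing by the projection
    as suggested above (22.5 rather than 20.2 in the Manning example)."""
    return abs(predicted - actual) / predicted

def site_score(pairs):
    """Average absolute % difference over (predicted, actual) pairs."""
    return sum(abs_pct_diff(p, a) for p, a in pairs) / len(pairs)

# Site A predicted 22.5 for Manning; he actually scored 20.2
print(round(abs_pct_diff(22.5, 20.2), 3))   # 0.102, i.e. a 10.2% miss
```

Because this measures the size of the miss in either direction, a site is penalized equally for overstating and understating a player.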
This is well-said. Taking the actual top 20 performing QBs for a given week introduces a clear bias by excluding the subset of guys that underperform (perhaps dramatically underperform) their projection that week.

That's definitely a measure you'd want to include in the study.
I agree. Even if you're going to use raw projections instead of rank, I would prefer it be done by creating an a priori list of players at each position who will be evaluated at the end of the week's games. In my earlier post, I suggested using any quarterback listed in the top 12 by any of the sites. You could expand that to 15 or whatever number you are comfortable with, but even with much consensus, the list will include more than 15 QBs.

Carson Palmer vs. Matt Cassel last week is a good example of how the proposed methodology of taking after-the-fact points scored fails. Palmer had a horrible week, but would just be discarded from this analysis. Cassel only played because of an injury, so no one expected him to score points before the week started. I'm more interested in who had Palmer projected lower than in who had Cassel projected to get 20 yards passing rather than 0.

Every week, there will be fullbacks who catch a TD pass, or a tight end who pops out of a hole to catch 2 touchdowns, or a backup wide receiver who catches one long pass. Unless one of the experts had them in starting or high-quality fantasy backup range, though, what difference does it make if one site projects them to get 2 catches and another 1?

I could appear to be a good prognosticator for the purposes of this after-the-fact evaluation with cutoffs by predicting Kyle Orton, Damon Huard, and Tarvaris Jackson to throw for 250 yards and 2 TDs, and by predicting that every fullback scores a TD.
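The a priori list described above can be built mechanically: before kickoff, take the union of every site's top-N projected players at a position. A minimal sketch, with made-up site names and rankings for illustration:

```python
def evaluation_universe(site_rankings, n=12):
    """Union of every site's top-n projected list, fixed before games start.

    site_rankings maps a site name to its ordered projection ranking
    (best player first)."""
    universe = set()
    for ranking in site_rankings.values():
        universe.update(ranking[:n])
    return universe

# Illustrative only: two sites, top-2 cutoff
rankings = {
    "site_a": ["QB1", "QB2", "QB3"],
    "site_b": ["QB2", "QB4", "QB1"],
}
print(sorted(evaluation_universe(rankings, n=2)))  # ['QB1', 'QB2', 'QB4']
```

Because any disagreement at the bottom of the lists expands the union, the resulting universe will usually be somewhat larger than N, as noted above.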

 
Gave it a little more thought. I think the key point is to determine the right "universe" of players at each position who will be evaluated. For example, you want to include all QBs like P. Manning, Romo, Brees, Palmer, McNabb, E. Manning, Favre, and Cutler who are established starters and most likely included in the top-20 rankings each week of every site to be evaluated (even if they don't score in the top-20 QBs for a certain week). Alternatively, going into Week 1 (last week) you wouldn't want to include any QBs like Cassel who were unlikely options to get much playing time (and the projections for the various sites would not be very meaningful when focusing on the overall "accuracy of projections").

What about the other QBs? I think you definitely want to include guys who could be selected as "reasonable starters" on fantasy teams (maybe including leagues with 2 QB starters?). I think I'd argue that you definitely want to include Rivers, Schaub, Hasselbeck, Kitna, Delhomme, Warner, T. Jackson, Russell, Garcia, and even Pennington and Campbell and Ryan and Orton and J.T. O'Sullivan. And Cassel should probably be included this week (Week 2) because he'll probably be starting for at least some fantasy teams -- and an accurate projection for Cassel is important to FF owners who may be deciding between him and other QBs on their roster.

So, exclude all non-starting QBs and any other QBs who are game-time decisions (or where starting status is unclear). Include all the rest (maybe 25-30 QBs if 16 games are to be played). I'd also develop subtotals of the accuracy measure(s) for the top-12 ranked QBs (consensus among sites being evaluated) and the "other QBs" each week (because some sites may be better at predicting performance of one group vs. the other).

It sounds like you're going to have written criteria for how players will be selected for evaluation purposes -- both inclusion and exclusion criteria -- for example, any player who incurs a serious injury (or any player who does not play any of the 2nd half) will be excluded. Specific criteria determined and published before the evaluation starts is a good idea IMO.

For RBs, a difficult problem will be how to handle the RBBCs. Off the top of my head, I think you want to include any RB expected to get at least 25-30% of the touches (% of the team's carries and receptions for RBs) -- and exclude the rest.

For WRs, I'd include the 2 starters on the depth chart for each team and add any other WRs projected to score more than a certain number of fpts. by a majority of the sites being evaluated.

Each week, I think it's important to post, before any games start, what players will be evaluated at each position (with verifiable criteria for excluding players from this list after games start). This will eliminate any questions about FBG's "cherry-picking" which players after the fact. And the list will change some from week to week, but probably not a lot -- which will provide a stable base of players to be evaluated.

IMO the most important goal is to conduct a study that is most relevant (for FF owners making decisions about which players to include in starting lineups) and produces results that are unbiased, robust and objectively reflect which site has the most accurate predictions.

Again, great idea and good luck.
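The RBBC inclusion rule suggested above (keep any RB expected to get roughly 25-30% of his team's RB touches) could be expressed as a simple filter. The threshold and the touch-share figures below are illustrative, not real projections:

```python
def rb_universe(expected_touch_share, threshold=0.25):
    """RBs whose expected share of their team's RB touches
    (carries plus receptions) meets the cutoff."""
    return {rb for rb, share in expected_touch_share.items()
            if share >= threshold}

# Made-up committee backfield: a lead back, a change-of-pace back,
# and a deep reserve who should be excluded
shares = {"RB_A": 0.65, "RB_B": 0.30, "RB_C": 0.05}
print(sorted(rb_universe(shares)))  # ['RB_A', 'RB_B']
```

Publishing the threshold (and the resulting list) before kickoff, as suggested above, is what makes the rule verifiable rather than subjective.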

 
After thinking on it more, I too think it should be done on projections and not rankings because of the flex position. Some of the hardest decisions a fantasy owner has to make are deciding what player to use in that one flex spot. Do you use this WR, that WR, or that RB for that one position? Those types of decisions win and lose games every week.
 
I know a great amount of work and effort is expended by FBG staff to get the projections up on a weekly basis, a daunting task. I do much better when I ignore the weekly projections and go with my gut. It's easy to understand that a strong SS/FS in the defense is going to diminish the receiving numbers of TEs and WRs going over the middle. It's also easy to understand that these positions are capable of taking the bite out of a run game. It's no less understandable that a good WR2 on team "A" can enhance the numbers of WR1.

The point I am making is that where baseball (a million games a season) is primed for metrics and mathematics, football is not. In football, everything is played out in 16 games, minus playoffs. Your O-line giving the QB time to throw can help nullify strong CB play in the secondary; conversely, if your guard has an undisclosed booboo on his left toe, that can be the difference between good and bad pass protection. We all saw the effect on Peyton Manning this week with his leg problem, but the real difference-maker was the absence of Saturday in the lineup. Torry Holt is another example: Bulger is a good QB, but the Rams line was pummeled and left little time for Bulger to wait for an opening from Holt making adjustments and comebacks against a solid double team.

I win championships and do well in fantasy baseball leagues every year, despite watching zero games. Why? Because the 160-plus games each season give a real good foundation for collecting numerical data; football is a crapshoot outside the consensus top five. Even then, as we are all aware, this year's stud can be next year's dud.

Willie Parker looks like the second coming of LT this year after only one game? Wasn't the Steelers line supposed to be decimated and chewed up? Or is Cleveland's defense that bad? Ride Parker into fantasy football Valhalla, or trade him now while his price is through the roof? I sure don't know.

Keep up what you guys have going on over here. I may rag on your projections and Dodds' other hard-headed assessments, but he is putting out a product and putting his name and reputation on the line every week. I can sit here anonymously and bag on Colin Dowling for riding the Cedric Benson bust wagon and other things, but I don't stand to lose anything. I keep coming back year in and year out because overall FBG is it in fantasy football.

 
Lots of excellent discussion in here so far.

So far only 3 competitors have been named (Fantasy Index, Fantasy Guru and Draftsharks).

If you know of sites that do quality weekly projections with depth, please add their name to the list.

Keep the ideas and discussion coming.

 
How about the big boys, like ESPN and Sportsline? They definitely do weekly projections within their league management suites.

Others to consider: The Huddle, FF Today, Fantasy Sharks.
 
As I stated earlier, I don't think this will bear the fruit that you are hoping for.

What about this scenario: I have E. Manning and Delhomme as QBs.

-You correctly project a modest day for Manning with results similar to QB7

-You project Delhomme for a poor day - say QB 16

Now Delhomme goes off and has a great day, and Manning produces similar to projections. So to the rankings, it's 1 success and 1 blown call - but I took the advice and started Manning and lost, when I would have won with Delhomme.

Now I know I make my own choices, so it's my fault - but aren't you hoping your projections are more accurate? It's relative to me, as I don't have P. Manning, Romo, etc. on my roster - my choice is E. Manning or Delhomme.

And ultimately, we're looking for insight into our roster decisions.

 
Tatum Bell said:
David, how many points are awarded to players who are not within the top 20 QBs, top 35 RBs, top 50 WRs and top 15 TEs in your rankings, but who end up there, or who are within the top 20 QBs, top 35 RBs, top 50 WRs and top 15 TEs but who don't end up placing there when it's all said and done?
We will use actual results to determine the players selected each week (not players we had projected high). If Frisman Jackson is a top player, then his stats count for that week and we all get compared on how well we predicted this player. I think this is an important distinction because some breakout performances are predictable (injury replacements, etc.) and these need to be scored.

 
On the flip side, if only one website correctly projects a disaster of a game for a player who may have a tough matchup that week, it won't be given credit for getting it right, because that player won't make the final comparison list. It does seem like you need to balance this out a bit more to include those types of situations. Finding breakout candidates is always helpful, but the decisions I think most of us struggle with on a weekly basis are the ones where we try to decide if it's the right move to sit a player we drafted to be an every-week starter. Not sure what the best way is to do this, but it does seem like a critical piece to include.

 
Driver said:
Gave it a little more thought. I think the key point is to determine the right "universe" of players at each position who will be evaluated. ...
Although you are probably right that we should use criteria like this to determine which players to choose each week, the second we do, the study gets ridiculed for choosing players that made us look good. Our plan is to use the players based on ACTUAL results that week. Yes, we will all miss the QB that played well in relief of an injury, etc. But that's fine. By using ACTUALs as the baseline players, it's obvious we aren't skewing the data. We have an absolute procedure with no subjectivity. And by going as deep as we are going, these are the players people should have PLAYED this week. That's what we want to try to measure.
 
on the flip side, if only one website correctly projects a disaster of a game for a player who may have a tough matchup that week, they won't be given credit for getting it right ...
Show me how to do this without it looking like we are subjectively picking players or skewing the data, and I am all ears. We will publish all the data. This is primarily looking at a way to roll up a number for all QBs, RBs, etc. I don't disagree there is something here, but I am going to need a method to implement it that does not suggest we stacked the players considered. Maybe create a top 50 player list plus the top performances? But what top 50 do we use each week? FBG's top 250 going forward? I can't emphasize enough that I do not want to expend a ton of resources only to have this thing ridiculed as biased and worthless. Although I hear the argument that you need to reward sites that said to avoid Torry Holt this first week, I think adding subjectivity on who gets counted ruins the results. On teams where the top RB or QB aren't givens, who gets automatically included?
 
Sources we are planning to track right now (but open to others):

FBG (Dodds)

FBG (Bloom)

ESPN

Yahoo

Fantasy Guru

Fantasy Index

The Huddle

KFFL (Free)

Fantasy Sharks (Free)

Draft Sharks

Fanball

FF Today (Free)

FF Mastermind

FF Docs

 
Show me how to do this without it looking like we are subjectively picking players or skewing the data and I am all ears. ...
You take any player, projected by ANY of the sites being monitored, among the Top "X" at that position in the pre-game projections. Let's say top 24 running backs. If Fantasy Sharks has Julius Jones inside the top 24 this week, even if he is outside the FBG top 24, he counts for that week, and then you look at whether your projection or Fantasy Sharks projection was closer on Julius Jones. But if none of the sites being monitored, including FBG, have a particular running back in the Top 24 for that week, then he is not included in the analysis of (points actually scored vs points projected). You may actually end up with something like 30 different running backs, because there will be disagreements at the bottom of the list, though most of the same names will appear. By this method, you are letting all the participants determine who counts and who does not, not just your opinion or rankings.
 
I think the simple way is to include the top 20 QBs (based on season totals) plus the top 20 QBs for that week. Then there is no bias, and sites get credit for predicting a down week for someone.
 
JKL said:
I agree. Even if you're going to use raw projections instead of ranks, I would prefer it be done by creating an a priori list of players at each position who will be evaluated at the end of the week's games. In my earlier post, I suggested using any quarterback listed in the top 12 by any of the sites. You could expand that to 15 or whatever number you are comfortable with, but even with much consensus, the list will include more than 15 QBs.
I am starting to think this is the correct approach. It is a bit harder on us doing the work, but it factors in the decisions most of us are making each week and rewards sites that correctly state someone is a bad play.
 
Sources we are planning to track right now (but open to others):
FBG (Dodds)
FBG (Bloom)
ESPN
Yahoo
Fantasy Guru
Fantasy Index
The Huddle
KFFL (Free)
Fantasy Sharks (Free)
Draft Sharks
Fanball
FF Today (Free)
FF Mastermind
FF Docs
This is going to be awesome and the "out of the box" approach that makes you guys so special. I have asked for this every year and you stepped up. Sincere thanks. One thing: I know a ton of people use CBS Sportsline, and I think they would add value to this set. It also covers what the other guys in your league might be doing, whether wrong or right. Plus, this list includes some pretty specific sites (unknown to me, and I have been playing fantasy football for 14 years). CBS gives you another broad-based sample.
 
The actual points (final stats) from Week 1 for QBs:

1. 30.4 McNabb (projected at #15, 16.8 pts)
2. 28.0 Brees (#3, 19.1)
3. 23.9 Cutler (#11, 17.2)
4. 22.8 Rivers (#20, 16.2)
5. 22.4 Rodgers (#9, 17.6)
6. 20.1 Kitna (#8, 17.8)
7. 19.6 Pennington (#30, 13.4)
8. 18.9 Romo (#2, 19.3)
9. 18.5 Schaub (#14, 17.0)
10. 18.4 TJackson (#25, 15.3)
11. 17.9 Favre (#19, 16.3)
12. 17.7 Russell (#31, 13.0)
13. 16.9 PManning (#6, 18.7)
14. 16.6 Roethlisberger (#5, 18.8)
15. 16.4 Delhomme (#26, 14.9)
16. 16.2 Flacco (#28, 14.4)
17. 15.7 EManning (#16, 16.5)
18. 14.7 Edwards (#27, 14.7)
19. 14.2 Garcia (#21, 16.1)
20. 13.5 Warner (#4, 18.8)

In addition to Brady (injured), the following QBs were in the projected top-20 but not the actual top-20:

23. 12.5 Hasselbeck (projected at #18, 16.3 pts)
24. 12.1 Anderson (#12, 17.1)
26. 10.7 Campbell (#10, 17.2)
27. 8.9 O'Sullivan (#17, 16.4)
31. 7.9 Young (#13, 17.1)
32. 5.5 Palmer (#7, 18.3)

For comparison purposes, fantasyguru.com had Anderson, O'Sullivan, Young and Palmer in their projected top-20 but not Hasselbeck or Campbell. However, they also had Garrard and Bulger in their projected top-20, whereas FBG did not. Neither FBG nor fantasyguru.com had the following in their projected top-20: Pennington (actual #7), TJackson (actual #10), Russell (actual #12), Delhomme (actual #15), Flacco (actual #16), Edwards (actual #18), or Garcia (actual #19).

With what I outlined above, the group of QBs to be evaluated would include some of the following categories:

1. Top-20 actual scorers
2. Anderson (projected top-15 by both sites, but not actual top-20)
3. Campbell, Young, Garrard, and Bulger (projected top-15 by only one site, but not actual top-20)
4. Anderson, O'Sullivan, Young and Palmer (projected top-20 by both sites, but not actual top-20)
5. Hasselbeck, Campbell, Garrard and Bulger (projected top-20 by only one site, but not actual top-20)
6. Ryan, Orton, Croyle - other undisputed starters

I think the last category (Ryan, Orton, Croyle) is arguable either way. However, limiting the pool to only the top-20 actual scorers and eliminating the other highly-projected QBs (Anderson, O'Sullivan, Young, Palmer, Hasselbeck, Campbell, Garrard and Bulger) would produce biased results, mainly by excluding the below-average performers (based on actual Week 1 points) who were expected to perform well by at least one site.

This is obviously a difficult problem to get right. Dividing by projected points, McNabb comes out as a (30.4 - 16.8)/16.8, or 81%, underprediction. At the other end of the scale, Palmer is a (18.3 - 5.5)/18.3, or 70%, overprediction. If you don't include Anderson, O'Sullivan, Young, Palmer, Hasselbeck, Campbell, Garrard and Bulger, you'll be excluding a lot of overpredictions while including a lot of underpredictions. IMO, for a complete picture of projection accuracy, you need both categories.
 
I think you need to reconsider. For the average player in a 12-team league, if they have a top-12 QB, and most of them will, they are going to start him. Rarely am I looking to FBG for help in deciding whether or not to start a Randy Moss or an LT. The rule goes: always start your studs. The differentiation comes into play with the second-tier players: RB2 vs. RB3, or RB3 vs. WR3 in a flex league.

My recommendation would be to pick the top 200 players each week prior to the start of games on Sunday and track how they did relative to projections. If you only pick the top 12 QBs, in most cases you are going to see who underpredicted them. I think it is just as valuable to see who overpredicted.

 
David, I'd suggest you go to MFL and look at some real leagues. Go to Reports -> Player -> Starter points - player.

In there you can do reports by position and see how many times each player was started. For example, here's a start 1 QB league of mine last year:

Player | # fantasy starts
Brady, Tom NEP QB | 16
Romo, Tony DAL QB | 16
Manning, Peyton IND QB | 15
Roethlisberger, Ben PIT QB | 15
Hasselbeck, Matt SEA QB | 15
Palmer, Carson CIN QB | 15
Kitna, Jon DET QB | 15
Favre, Brett GBP QB | 11
Brees, Drew NOS QB | 11
McNabb, Donovan PHI QB | 11
Garcia, Jeff TBB QB | 7
Young, Vince TEN QB | 7
Rivers, Philip SDC QB | 7
Bulger, Marc STL QB | 6
Warner, Kurt ARI QB | 5
Cutler, Jay DEN QB | 4
Anderson, Derek CLE QB | 3
Schaub, Matt HOU QB | 3
Delhomme, Jake CAR QB | 2
Garrard, David JAC QB | 2
Manning, Eli NYG QB | 2
Huard, Damon KCC QB | 2
Losman, J.P. BUF QB | 2
Redman, Chris ATL QB | 1
Huard, Damon KCC QB | 1
Collins, Todd WAS QB | 1
Clemens, Kellen NYJ QB | 1
Carr, David CAR QB | 1
Rosenfels, Sage HOU QB | 1
Pennington, Chad NYJ QB | 1
Harrington, Joey ATL QB | 1
Feeley, A.J. PHI QB | 1
Griese, Brian CHI QB | 1
Culpepper, Daunte OAK QB | 1
McNair, Steve BAL QB | 1

So 8 QBs were started every game they played, or every game they played minus one.

6 other players started between 4 and 7 times. Another 7 QBs started at least 2 fantasy games.

I'd think at the very least you should include enough QBs to capture the top 14, who each started at least a quarter of a fantasy season. Looking at those will also show some interesting things, like the fact that it isn't open-and-shut that the top 12 QBs get started. Derek Anderson started only 3 games in that league because he was a backup to Drew Brees. Garcia and VY started 7 times each despite finishing as QBs 17 and 18 on the year. Guys like that should be included, since obviously there were teams having to choose between them and other QBs.

 
No love for this idea?
By week six or whenever this is starting, we'll have PPG averages for everyone, yes? Then instead of using the top 20 or whatever, why don't you compare projections for the 20 players whose performance that week deviated the most from their season average?

Say, for example, that through six weeks Amani Toomer is averaging 3.5 fantasy points per game, and in week 7 he scores 14 fantasy points. He would be one of the players you would use. Assume also that Peyton Manning is averaging 20 PPG through six weeks, and in week 7 he scores 11 points. He might be another one of the players you would use. And so on.

This would allow you to compare the abilities of the various sites to correctly identify players who will significantly under- and over-perform their expected production.
It's completely objective, and it will allow you to balance the overperformers with the underperformers. You could combine this, perhaps, with one of the other methods discussed in this thread to ensure you are covering a reasonable sample of players.
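A minimal sketch of that deviation-based selection, assuming season PPG averages are already computed (the numbers mirror the hypothetical Toomer/Manning example above):

```python
# Pick the n players whose actual score this week deviated most, in absolute
# terms, from their season per-game average. Names and points are illustrative.

def biggest_deviators(ppg_avg, week_actual, n):
    """ppg_avg / week_actual: {player: fantasy points}. Returns the n players
    with the largest |actual - average|, most extreme first."""
    common = set(ppg_avg) & set(week_actual)
    return sorted(common,
                  key=lambda p: abs(week_actual[p] - ppg_avg[p]),
                  reverse=True)[:n]

ppg   = {"Toomer": 3.5, "Manning": 20.0, "Smith": 10.0}
week7 = {"Toomer": 14.0, "Manning": 11.0, "Smith": 10.5}
# Toomer deviates by 10.5 and Manning by 9.0, so both make the list;
# Smith (0.5) does not.
print(biggest_deviators(ppg, week7, 2))  # ['Toomer', 'Manning']
```

Because the selection is driven entirely by observed deviation, no one can claim the evaluated pool was hand-picked.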
 
Sorry for the delay in keeping this on track.

We have some excellent discussion so far, but we need to move forward towards conclusions so this can be documented in advance of starting.

We have a list of who is going to be compared to David Dodds' and Sigmund Bloom's projections.

Now we need to determine, at a minimum:

a) How many players get included in the projections (e.g., top 12 before and after, just the top 20 after, etc.)

b) Once we decide on the number of players used, how do we determine good projections, and who has the best overall?

I know there have been some great ideas put forward so far; any further comment?

 
Any further input or will we just move forward with what we've got so far?
Andy, I think it would help if you provided us some insight. Which ideas are YOU considering, and why (or why not, in some cases)? I think that whichever method you pick, as long as there is some statistical basis to it, will be (mostly) accepted.

Where there seems to be the biggest discussion is the number of players picked and when you pick them: before or after the weekly games.

I am still strongly in favor of the top 200. This covers the overprojections and the underprojections and gives you an unbiased sample size with results that have meaning to your subscribers.
 
Thanks GoBears. There are plenty of suggestions in this thread as to the correct approach to take, some vastly different.

I'm trying to avoid getting too involved in the decision-making process, as that may compromise the findings at the conclusion of the study. We want this to be as open, fair and balanced as possible, with as little leverage for debate/whining as we can manage at the end. The numbers should speak for themselves.

What I will do is sum up where we are, list a few choices and prepare for the commencement of the study. I'll get a good chance to do that tomorrow, and we'll get working on this through the week.
 
:

For example: P. Manning is a top 20 QB in week 8 and throws for 260 yards, 2 passing TDs, 1 interception and has 2 rushing yards. FBG scoring calculates this as 260/20 + 4*2 - 1 + 2/10 = 20.2 fantasy points... If site A projected 290 yards and 2 TDs and site B had 265 yards, 1.8 TDs, 0.6 INTs, and 3 rushing yards, then the comparison would look like this:

Site A translates to 22.5 FP and Site B translates to 265/20 + 1.8*4 -.6*1 + 3/10 = 13.25 + 7.2 -.6 +.3 = 20.15

Site A = 1 - ( |20.2 - 22.5| / 20.2 ) = 88.6% accuracy for P. Manning

Site B = 1 - ( |20.2 - 20.15| / 20.2 ) = 99.8% accuracy for P. Manning

:
Sorry to be late in the game, but I would calculate the variance differently. Are these sets of projections really equal?

Site        | PYds | PTds | Int | RYds | FP
FBG scoring | 0.05 | 4    | -1  | 0.1  |
Actual      | 260  | 2    | 1   | 2    | 20.2
Site A      | 323  | 1    | 0   | 0    | 20.15
Site B      | 265  | 1.8  | 0.6 | 3    | 20.15

To show that these projections are not equally accurate I'd compare the components first and then use the FBG scoring to weight the components.

Site   | PYds                    | PTds              | Int               | RYds              | Var  | Score
Site A | |260-323| * 0.05 = 3.15 | |2-1| * 4 = 4     | |1-0| * 1 = 1     | |2-0| * 0.1 = 0.2 | 8.35 | 59
Site B | |260-265| * 0.05 = 0.25 | |2-1.8| * 4 = 0.8 | |1-0.6| * 1 = 0.4 | |2-3| * 0.1 = 0.1 | 1.55 | 92

For the score I just used essentially the same method you used

Site A = 1 - (8.35 / 20.2) = 59%

Site B = 1 - (1.55 / 20.2) = 92%

I'm guessing there is a better methodology for this also, but I'd have to yield to the mathematicians of the world.

To me this method is really no more difficult to set up, and it has the advantage of not allowing a site to get lucky when one blown stat category makes up for another. I might combine rushing and receiving TDs, especially for running backs, instead of keeping them separate, if we assume most leagues score them the same; otherwise, comparing the components makes the results more meaningful for those not using "standard FBG" scoring.

I don't expect you to actually change anything in your proposed methodology, but just wanted to throw this out there.
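For anyone who wants to play with the two measures side by side, here is a small sketch reproducing both on the Manning example, using the FBG weights from this thread (this is an illustration of the proposed math, not the study's actual code):

```python
# Two accuracy measures from the thread, applied to the Manning example.
# Stat order: (pass yds, pass TDs, INTs, rush yds); weights are FBG points
# per unit of each stat.
WEIGHTS = (0.05, 4.0, -1.0, 0.1)

def fantasy_points(stats):
    """Convert a stat line to total fantasy points under FBG scoring."""
    return sum(w * s for w, s in zip(WEIGHTS, stats))

def total_fp_accuracy(actual, projected):
    """Original proposal: compare only the total fantasy points."""
    a, p = fantasy_points(actual), fantasy_points(projected)
    return 1 - abs(a - p) / a

def component_accuracy(actual, projected):
    """Alternative proposal: sum the weighted error of each stat category,
    so offsetting misses can't cancel each other out."""
    a = fantasy_points(actual)
    err = sum(abs(w) * abs(s - t) for w, s, t in zip(WEIGHTS, actual, projected))
    return 1 - err / a

actual = (260, 2, 1, 2)       # 20.2 FP
site_a = (323, 1, 0, 0)       # also ~20.15 FP, via very different stats
site_b = (265, 1.8, 0.6, 3)   # ~20.15 FP, close on every category
print(round(total_fp_accuracy(actual, site_a), 3))   # ~0.998
print(round(component_accuracy(actual, site_a), 3))  # ~0.587
print(round(component_accuracy(actual, site_b), 3))  # ~0.923
```

The total-points measure rates Site A nearly perfect despite badly missing every category, while the component measure separates the two sites, which is exactly the point of the alternative.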

 
We just received an email from one of the bigger premium sites asking to remain out of the study. This is the problem we may have with doing such a study: if we don't include the major sites, people will say the study is inconclusive. If we include them and they perform badly, the bad performers will all ask to be removed and claim we did not have permission to copy/post their premium content.

Andy is going to post soon with the strategy we plan on using here. We are likely going to need to consult with our lawyers to determine what we can and can not do here (in regards to using someone's premium content after the week passes).

 
As long as you don't publish the other sites' content ahead of time, I see no problem with this study, nor do I see a problem with using their data without permission so long as you pay for it and are giving proper attribution (and of course accurately posting it).
 
This is tricky, though, as it is data posted as premium content, so we really can't ever show that data. We can analyze it all we want but could get into legal issues if we ever posted the content (even a year later), just as another site can't publish articles we have written. This is something I am going to consult our lawyers about.

We are in a bad spot here because if we hide the data behind the numbers, it looks fishy. If we publish the data with the numbers (our intended purpose), we could be walking into a lawsuit unless we get permission. And I could see permission being withdrawn as soon as a site does poorly here. Not sure what the solution is. A lot of sites that don't go through the rigor we do in publishing weekly projections have a lot to lose and little to gain by being part of this, in my opinion.
 
Old projections are worthless except for the type of study you're now doing; it's the old stats that have lasting value for fantasy players. Take a look at each site's Terms of Use and see what they say about use/publication of data. I think at a minimum you can do your study and publish the results with the websites' names, but don't publish the raw data; allow a handful of respected non-staffer posters here to confirm that your data was correct (and no, I'm not presuming to nominate myself for such a role).

Your only mistake there could be wrongfully representing what the other sites' data was, but since your whole point is to get accurate results, I have every confidence that won't happen.

 
One site does not want to be included (FF Mastermind?).

In its place, could 4for4.com be included?

Thanks for taking on this project.

 
I can imagine sites changing things "last minute" as well and then you're "in a spot" trying to prove what it was.
 
Take screen shots. Pretty simple really. It is a valid point otherwise. Because sites do update their game predictions as the week goes on, you should get the data from the different sites at the same time, and probably as close as possible to the first games of the week.
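One way to make those snapshots tamper-evident (an illustrative assumption about tooling, not anything the study has committed to) is to store each capture with a timestamp and a SHA-256 digest of the data, so a later dispute can be settled against the recorded hash:

```python
# Sketch: snapshot a site's projections at capture time and record a
# SHA-256 digest. The digest changes if the data is altered afterwards.
# Site and player data are hypothetical.
import hashlib
import json
import time

def snapshot(site, projections):
    """projections: {player: projected stat dict}. Returns a record whose
    sha256 field fingerprints the captured data."""
    payload = json.dumps(projections, sort_keys=True)
    return {
        "site": site,
        "captured_at": time.strftime("%Y-%m-%d %H:%M:%S"),
        "sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "data": projections,
    }

snap = snapshot("Site A", {"P. Manning": {"pass_yds": 290, "pass_tds": 2}})
print(snap["sha256"][:12])  # short fingerprint of the captured data
```

Publishing just the digests at capture time would let anyone later verify that the projections used in the analysis were not changed after the games, without exposing the premium data itself.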

 
Of course, TB, but I figure Dodds or Andy or whoever doesn't want to be in the spot of having to prove it. It's a Pandora's box thing.
 
Or just collect the data after the games are completed. That way you're assured you're using everyone's final projections. Hard to imagine any of these sites going back and fudging their forecast numbers after the results are in. It might be an enlightening sidebar study, but I'd be shocked if it happens.

 
No way. If they're concerned enough to email Dodds about this, then I could see them fudging their numbers to look better in this study. As for Dodds having to "prove" his findings, isn't that precisely what he needs to do to have a reliable study anyway?

 
This is exactly the Pandora's box opening I was talking about.
 
Nobody likes to be audited, so this stuff goes with the territory. The irony here is that FBG's may not end up on top, and what could be better advertising for another site than to be able to say "#1 in-season projections per exhaustive Footballguys.com study!"?
 
Hard to imagine any of these sites are going back and fudging their forecast numbers after the results are in. Might be an enlightening sidebar study,
let us know what you find out
I think you have me mistaken for the folks who are actually undertaking the study.

Once they've got the data collection protocols in place for the study, it should be cake to press "go" once on Sunday morning, and then again on, say, Tuesday morning, and then compare the two samples.

I'd be pretty surprised if they were different, but if they were, then that'd certainly be something worth knowing, too.
 
Lots of fantastic ideas in this thread. Pending what David is chasing up, and with a little compromise here and there, I think there's a workable method that isn't too complicated, is fair, and should be a reliable pointer to good projections.

Believe me, every comment in this thread has been taken into consideration. Some are very difficult to implement this time around, and others contradict other suggestions, so I'm just stating where we're at.

This isn't final and feedback is more than welcome:

Sites intended for inclusion have been listed previously; we'll see if that list gets reduced before we officially start.

The players to be analyzed are ANY QB listed in someone's Top 12 before the start of the games, ANY RB in someone's Top 24, ANY WR listed in someone's Top 30 and ANY TE listed in someone's Top 12. These players and these players only will count towards the projection analysis, regardless of their finishing position.

This should end up being a healthy list each week and avoids including the type of players nobody thought were starter worthy who have an unexpected big game.

Scoring:

In an effort to compromise between those who want projections and those who want some kind of rankings involvement, any player that is projected in the top third and finishes in the top third will get a score of 100%. To clarify: any player that is projected to finish as a Top 4 QB, Top 8 RB, Top 10 WR or Top 4 TE and actually does so gets a 100% score.

All other players are scored with the formula David described in the opening post:

Site A = 1 - ( |20.2 - 22.5| / 20.2 ) = 88.6% accuracy for P. Manning

Site B = 1 - ( |20.2 - 20.15| / 20.2 ) = 99.8% accuracy for P. Manning

Each player will be weighted equally for overall results.
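For anyone who wants to play with the numbers, the compromise scoring above can be sketched in a few lines of Python. This is just a sketch under stated assumptions, not anyone's actual implementation: it uses the FBG scoring from the opening post (1 point per 20 passing yards, 4 per passing TD, -1 per INT, 1 per 10 rushing yards), and the 0% floor for misses larger than the actual total is my own addition since the thread doesn't spell that case out.

```python
# Sketch of the proposed compromise scoring. The 0% floor for huge misses
# is an assumption; the thread does not define accuracy below zero.

TOP_THIRD = {"QB": 4, "RB": 8, "WR": 10, "TE": 4}  # the 100% cutoffs above

def accuracy(actual_fp, projected_fp, position=None,
             projected_rank=None, actual_rank=None):
    """Accuracy of one projection under the compromise scoring.

    A player projected inside the top third at his position who also
    finishes inside the top third scores a flat 100%; everyone else is
    scored with 1 - |actual - projected| / actual.
    """
    if position is not None and projected_rank is not None and actual_rank is not None:
        cutoff = TOP_THIRD[position]
        if projected_rank <= cutoff and actual_rank <= cutoff:
            return 1.0
    return max(0.0, 1 - abs(actual_fp - projected_fp) / actual_fp)

# The opening-post example: Manning scores 20.2 actual fantasy points
print(round(accuracy(20.2, 22.5) * 100, 1))   # Site A: 88.6
print(round(accuracy(20.2, 20.15) * 100, 1))  # Site B: 99.8
```
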

When:

Projections will be obtained as close as possible to the kickoff of the early Sunday games, and all sites will be captured as close together as possible. Screenshots will be taken.

Please note that this is the first attempt to analyze projections across sites, and it's probably better to keep it as simple as possible to start off with. If this takes off and proves a worthwhile endeavor, we can refine the model in future seasons.

Once we finalize the method we're running with, it will be locked in until the end of the season.

 
Let me be the first to say that I hate the scoring compromise, Andy.

Each player/projection should be evaluated using the method shown in the P. Manning example (or one like it). Don't flip/flop between apples and oranges.

 
I understand your concern, davearm; I'd like other feedback to see where others stand.

If a player is projected to finish as a top 4 QB and he finishes there, he's done his job. I know it's not perfect, but it's the best compromise that could be made.

Whatever the final decision is, we'll stick with it, but we can look at how other scoring methods would have fared once the season is completed, and there's no doubt it will probably change for future seasons.

First we have to finalise the 2008 plan, and once that's done, that's what we'll be basing the projection analysis on.
 
Imagine the following projections for a given week:

Site A projects:
QB4: Peyton Manning 250 yds, 2 TDs = 18.0 FPTs
QB5: Donovan McNabb 240 yds, 2 TDs = 17.6 FPTs

FBGs projects:
QB4: Donovan McNabb 260 yds, 2 TDs = 18.4 FPTs
QB5: Peyton Manning 250 yds, 2 TDs = 18.0 FPTs

Both sites have the exact same projection for Manning, and the difference on McNabb is 20 yards.

Actual stats finish as follows:
QB1: Peyton Manning 400 yds, 5 TDs = 36.0 FPTs
QB5: Donovan McNabb 250 yds, 2 TDs = 18.0 FPTs

By an objective measure, both sites did equally well with their projections -- both missed by just 10 yards on McNabb, and both missed bigtime (and by the same amount) on Manning. However...

Site A scores as follows:
Manning: 100% (predicted top 4, finished top 4)
McNabb: 1 - [|18.0 - 17.6| / 17.6] = 98%
Average: 99% accuracy

FBG scores as follows:
McNabb: 1 - [|18.0 - 18.4| / 18.4] = 98%
Manning: 1 - [|36.0 - 18.0| / 18.0] = 0%
Average: 49% accuracy

Final Tally:
Site A: 99% accuracy
FBG: 49% accuracy

Would FBG agree with this conclusion if this was someone else's study?
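For what it's worth, the arithmetic in that counterexample checks out. A short sketch (assuming the scoring its totals imply, 1 point per 25 passing yards and 4 per passing TD, and its convention of dividing each miss by the projected score) reproduces the 99% vs. 49% split:

```python
# Reproduces the counterexample above. Assumes 1 pt per 25 passing yards and
# 4 per passing TD (the scoring its totals imply), and divides each miss by
# the projected score, as the percentages in the example do.

def fp(yards, tds):
    return yards / 25 + 4 * tds

def acc(actual, projected):
    return max(0.0, 1 - abs(actual - projected) / projected)

manning_actual = fp(400, 5)  # 36.0 FP
mcnabb_actual = fp(250, 2)   # 18.0 FP

# Site A: Manning projected QB4 and finished top 4 -> flat 100% under the rule
site_a = (1.0 + acc(mcnabb_actual, fp(240, 2))) / 2
# FBG: Manning projected QB5, so both players go through the raw formula
fbg = (acc(manning_actual, fp(250, 2)) + acc(mcnabb_actual, fp(260, 2))) / 2

print(round(site_a * 100), round(fbg * 100))  # 99 49
```

Identical misses, wildly different scores: the flat 100% hides Site A's 18-point error on Manning, while the raw formula charges FBG the full miss.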
 
Andy,

I'd agree with you - IF you were picking more than 12 QBs. IMHO, you've missed the mark here.

Are you looking at the accuracy of your rankings or the accuracy of your projections? What are the FBGs trying to get out of this? A number you can hang your hat on and say you're better than site XYZ? It's a lose-lose situation. Damned if you do, damned if you don't.

Personally, I'd be happy with FBGs worrying about their own projections. Your analysis, insight and tools speak for themselves.

Joel
 
:kicksrock: :goodposting: :goodposting: Please do not pander to anyone who thinks rankings should be involved. JUST USE PROJECTIONS. Rankings are derived from projections anyway, so it's not like you're leaving anything out. (If Site XYZ projects Manning for 300/3/1 and projects Romo for 270/2/1, it's pretty obvious that they would rank Manning ahead of Romo.) There's no good reason to include rankings - it will only make your project more complicated and suspect, as the above post shows.

 
CalBear, they are picking more than 12 QBs. The method is "ANY QB listed in someone's Top 12 before the start of the games, ANY RB in someone's Top 24, ANY WR listed in someone's Top 30 and ANY TE listed in someone's Top 12."

I checked this using just 6 of the sites' projections for week 2 and came up with 21 different quarterbacks ranked in someone's top 12, and that was with fewer sites than this study will use. With over 10 independent groups of projections there will be some consensus (guys like Cutler, Romo, and Brees will probably appear in everyone's top 12), but also a lot of differences in slots 10-12. Some will have Trent Edwards top 12, some will have Anderson, or Palmer, or Collins, or O'Sullivan, but others will not. All of these players would then be included. You might not include a few, like Russell, or Huard, or Green, or Flacco, who probably won't appear in anyone's top 12. Same at running back or wide receiver. There is no way that the sites are going to be in complete agreement. On a week when 6 teams are off for byes, lots of different second running backs will appear in someone's top 24 at RB, and lots of different second wide receivers.

I would project that for this week, this methodology would result in the following numbers at each position:

QB-20

RB-36

WR-44

TE-18

TOTAL-118
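The pool construction described above (any player appearing in any site's top-N list) amounts to a set union. A hypothetical sketch, with made-up site names and a made-up disputed twelfth QB slot:

```python
# Sketch of the scorecard pool: the union of every site's top-N list at a
# position. Site names and the disputed twelfth QBs are hypothetical.

TOP_N = {"QB": 12, "RB": 24, "WR": 30, "TE": 12}

def scorecard_pool(rankings_by_site, position):
    """Every player appearing in at least one site's top-N at the position."""
    n = TOP_N[position]
    pool = set()
    for ranked_players in rankings_by_site.values():
        pool.update(ranked_players[:n])
    return pool

# Three hypothetical sites that agree on eleven QBs but differ on the twelfth
consensus = [f"QB{i}" for i in range(1, 12)]
site_rankings = {
    "SiteA": consensus + ["Edwards"],
    "SiteB": consensus + ["Palmer"],
    "SiteC": consensus + ["Collins"],
}
pool = scorecard_pool(site_rankings, "QB")
print(len(pool))  # 11 consensus QBs + 3 different twelfth picks = 14
```

This is why the pool grows well past 12 QBs: every site's fringe picks get added, while nobody's leftovers do.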

I do agree with the concerns about the methodology and the specific example cited by davearm.

A second-level issue is how you weight the positions. If all of the, let's say, 120 projections for that week are weighted equally, then performance on wide receiver projections will be the most important factor in deciding who ranks high, especially as compared to QB. You have the potential for a Simpson's paradox.
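To make the weighting concern concrete, here is a hypothetical sketch using the pool sizes estimated above; the per-position accuracy numbers are invented purely for illustration.

```python
# Illustrates the weighting issue: with every player weighted equally, the
# 44-player WR pool moves the overall number far more than the 20 QBs.
# Pool sizes come from the week estimate above; accuracies are invented.

pool_sizes = {"QB": 20, "RB": 36, "WR": 44, "TE": 18}
site_acc = {"QB": 0.90, "RB": 0.70, "WR": 0.60, "TE": 0.80}  # hypothetical

# Equal weight per player: WRs contribute 44/118 of the overall score
per_player = (sum(pool_sizes[p] * site_acc[p] for p in pool_sizes)
              / sum(pool_sizes.values()))

# Equal weight per position: each position contributes one quarter
per_position = sum(site_acc.values()) / len(site_acc)

print(round(per_player, 3), round(per_position, 3))
```

A site that happens to be great at the scarce positions (QB, TE) and mediocre at WR looks very different under the two weightings, which is exactly the decision the study has to make up front.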

 
I would be glad to see an analysis of internal FBG projection accuracy (as a function of position) as an initial starting point. The types of questions I want answered are:

1. Does Herman have better kicker projections than Dodds/Norton?

2. Does Bloom have better non-kicker/non-DST projections than Dodds/Norton? Perhaps Bloom is better on WRs, and Dodds/Norton is better at QBs? That would be valuable information to know.

Similarly, I would like to see a post-mortem analysis of initial yearly "Expert Ranking" forecasts as a function of position for each contributing staffer. Many leagues are won or lost based on the information in the yearly projections.

Comparison across sites is a nice stretch goal, but internal FBG analysis seems like the right starting point.

 
