
Comparing Weekly Projection Accuracy

supersecretid

Footballguy
Ok, FBG kicked this idea around at one point, but I'm actually going to follow through on it and will be posting the results on a weekly basis. I'm going to compare a laundry list of projection sites to each other throughout the season to see who provides the most accurate results. I have a few categories of methodology to clear up:

Player Universe - the players used to compare the sites

1. Use the top xx players at each position in actual performance for that week (Top 20 QB, Top 50 WRs, etc)

The problem with this is players who totally bust don't end up in the set of players that get used for comparison (e.g. I rank Brees 2nd, he ends up 22nd. This wouldn't matter because he is not in the comparison list).

2. Sites are compared only on the players they rank in the top xx at each position (Top 20 QB, Top 50 WRs, etc)

The problem with this is there's no penalty for missing a guy who has a big game (e.g. I rank Brees 22nd at QB in Week 1, he ends up 2nd. This wouldn't matter because he wasn't in my comparison list). It also ranks each site on a different set of players which could be perceived as unfair.

3. A player who is top xx in ANY of the rankings being compared goes onto the list of players who will be compared (Top 15 QB, Top 30 WR, etc.)

So a player would only need to be listed by ANY site, and then every site's ranking for that player will be looked at. Potential problems with this arise if some site doesn't give any rankings for another site's #30 WR (probable solution: just don't include that player in the analysis for that site). I just recently added this idea, so there may be other drawbacks I'm not thinking of. It is possibly my favorite thus far (a code sketch of this union approach follows the list of options).

4. Use the top xx player at each position for the whole season

I don't really like this idea because it ignores projecting week-to-week surprises, is very clunky for the first few weeks, and again clunky when major players are out due to injury.
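
For illustration, here's a minimal Python sketch of the Option 3 universe described above: a player enters the comparison set if ANY site ranks him inside the positional cutoff. The site names, cutoffs, and data shapes are made-up placeholders, not anyone's actual data.

```python
# Option 3 sketch: union of every site's top-N players at a position.
# All names and numbers below are hypothetical examples.
CUTOFFS = {"QB": 15, "RB": 30, "WR": 30, "TE": 15}

# projections[site][position] -> list of player names, best first
projections = {
    "SiteA": {"QB": ["Brees", "Manning", "Brady"]},
    "SiteB": {"QB": ["Manning", "Rodgers", "Brees"]},
}

def player_universe(projections, position, cutoff):
    """Union of every site's top-`cutoff` players at one position."""
    universe = set()
    for site_ranks in projections.values():
        universe.update(site_ranks.get(position, [])[:cutoff])
    return universe

print(sorted(player_universe(projections, "QB", CUTOFFS["QB"])))
# ['Brady', 'Brees', 'Manning', 'Rodgers'] -- every site is then graded
# on this shared list (or skips a player it never projected).
```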

Comparison

Once I pick out the players being analyzed, I need to figure out how to compare them.

The simplest answer is |projected fantasy points - actual fantasy points| = error. Smallest total error is best.

Alternatively, it could be percentage-based: |projected points - actual points| / projected points.

I could also do a traditional correlation coefficient, which is less intuitive for the general public but would likely be considered credible.
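
For illustration, a minimal sketch of the three candidate metrics on made-up numbers (absolute error, percentage error, and Pearson's correlation; note `statistics.correlation` needs Python 3.10+):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical projected vs. actual fantasy points for one site.
projected = [22.0, 18.5, 15.0, 12.0]
actual = [19.0, 25.0, 11.0, 13.5]

total_abs_error = sum(abs(p - a) for p, a in zip(projected, actual))
mean_pct_error = sum(abs(p - a) / p for p, a in zip(projected, actual)) / len(projected)
r = correlation(projected, actual)  # Pearson's r between forecast and result

print(f"abs error {total_abs_error:.1f}, pct error {mean_pct_error:.1%}, r {r:.3f}")
```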

I generally want to stay away from anything based on rank order, as I don't think it will be as precise.

Participants

Sites will be included with or without their permission. Their actual projections will not be posted at any time (I will rely on readers who subscribe to those sites as well as archived screenshots for credibility). Here is my list thus far:

FBG (Dodds)

FBG (Bloom)

ESPN

Yahoo

CBS

Fantasy Guru

Fantasy Index

The Huddle

KFFL (Free)

Fantasy Sharks (Free)

Draft Sharks

Fanball

FF Toolbox

FF Today (Free)

FF Mastermind

FF Docs

Fantasy Sports Central

Talented Mr. Roto

If a site will not provide a free subscription for this, it may or may not be included (and a note of the reason will be made). I can afford to purchase a couple, but I can't pay for 12 different sites for this.

Time of data capture

Rankings are updated and modified throughout the week. I may do an early and late version (Wednesday night and Saturday). If I don't do multiple versions, the data capture will probably take place Saturday afternoon. Obviously some information will come out after that, but sites should be on a pretty level playing field by that time.

---------------------------------------------------------

Some other quick notes:

I'm not particularly interested in weighting certain players more than others. Some think ranking the studs consistently is important; others think ranking waiver-wire guys is what matters; you can't please everyone. I also think your ability to accurately project from top to bottom is most indicative of your skill and reliability.

I'm also not interested in comparing rank order. It's not particularly precise, and again, the best projector will likely give the best ranking order over the long run as well.

In other words, I want to try to stick to making this a very clean simple system for seeing who is best at projecting performance. The practical applications of this can be decided by the person looking at the data.

So, that's all. For every 20 responses there are probably 20 different opinions. Whatever system is used won't be perfect, but it also shouldn't show any favoritism to one site over another, and thus can be considered fair.

EDIT: If you would be interested in helping out in any way, we could use it. Shoot me a PM.

Here are some things that would especially help:

*Any database experience - Long-term I'd like to have a database of this stuff as it would be the most powerful way to report on the data in the future

*Data-scraping experience - If you have experience grabbing data off the web, whether it's via script, Excel, whatever, let me know. I've done some Excel web queries, but I'm not great with them (see the scraping sketch after this list).

*Subscriptions - I don't want to violate sites' rules, but footing the bill for 10 different pay sites adds up (I will do it if I need to). If you have any subscriptions we could probably work that out while still respecting the involved sites and their information.

*Manual data-scraping - If it comes down to copy/pasting stuff into Excel, having a handful of people to grab and format the data to just be dumped into my master spreadsheet would be HUGE.

*Analysis - ...the original point of my thread. The more minds thinking about how to work this out the better

*Anything else - I'm sure there are tons of other things that I haven't even thought of.
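
For anyone curious what the scripted version of the data grab might look like, here's a minimal sketch using pandas, whose read_html pulls every HTML table off a page much like an Excel web query. The URL is a made-up placeholder, and the first-table assumption would need checking per site:

```python
import pandas as pd  # read_html also needs lxml or html5lib installed

url = "https://example.com/weekly-projections"  # placeholder URL
tables = pd.read_html(url)   # one DataFrame per <table> on the page
proj = tables[0]             # assume the first table holds the projections
proj.to_csv("week1_projections.csv", index=False)  # dump for the master sheet
```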

 
:mellow:

To set your universe, I'd use ADP and the top 20 QBs, top 40 RBs, top 40 WRs, top 20 TEs, and top 20 Ds. Track those individual guys throughout. Some guys will drop out due to injury/suckiness, but that's OK.

 
Great idea. I applaud you for trying this. Many will argue the sample size is too small to be relevant, but you actually might get a sizable body of data by tracking 60+ players for each site over 16+ weeks.

For player universe, I agree that Option 3 seems strong. If one site does not have any projection for a player that another site has in the top XX, then you could give the non-ranking site a 0/0/0 projection for that player. Alternatively, you could give that player a projection equal to the lowest projection that site gave any other player at the same position. I guess the answer depends on how much you want to penalize sites for missing a breakout player, and thus reward other sites for correctly picking a breakout. I'd probably lean toward giving 0/0/0, because if you give the lowest projection, you're advantaging sites that only give a handful of projections (i.e., advantaging a site that only projects the top 20 at each position because its "floor" is higher). If Option 3 is too much work, then Option 2 seems like the best alternative.
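
A minimal sketch of that 0/0/0 approach, assuming projections are kept as player-to-points dictionaries (the names and numbers are made up):

```python
# One site's forecasts, and the Option 3 universe it must be graded on.
site_proj = {"Brees": 21.4, "Manning": 20.1}
universe = {"Brees", "Manning", "Rodgers"}

# Missing players get a zero forecast, so the site is penalized
# if an unlisted player has a big game.
filled = {player: site_proj.get(player, 0.0) for player in universe}
print(filled)  # Rodgers -> 0.0
```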

For the comparison metric, why not use all three options you list? I'm sure you're going to do this in an Excel spreadsheet, so once you code in the standard calculation for each approach, the software will do the work for you. If you have three different comparison methods, you can get a better sense of which is most meaningful once the data is being processed. I suspect all three will yield similar results, but if there is an aberration, that might be interesting to investigate further. If you're forced to choose among the three, I'd pick either the percentage comparison or the correlation coefficient. I agree with avoiding rank order -- that's not very useful.

For sites to consider, I agree with your list. I think it also would be interesting to test the "wisdom of crowds" theory by averaging the projections of all the participants together, and seeing how that averaged projection compares to actual results. Having this averaged projection might also be a useful comparison metric, since each site could be evaluated for how much better or worse it performed than the average. I'd tend to bet that the averaged projection finishes the season with a better record than most of the individual sites.
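
The "wisdom of crowds" composite is easy to compute once the per-site numbers are in one place. A minimal sketch, with made-up data, averaging each player's projection across every site that forecast him:

```python
all_proj = {
    "SiteA": {"Brees": 22.0, "Manning": 19.0},
    "SiteB": {"Brees": 20.0, "Manning": 21.5, "Rodgers": 18.0},
}

crowd = {}
for site in all_proj.values():
    for player, pts in site.items():
        crowd.setdefault(player, []).append(pts)

consensus = {p: sum(v) / len(v) for p, v in crowd.items()}
print(consensus)  # Brees averages 21.0 across the two sites
```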

Great project! Good luck.

 
:bag:

Good to see someone intends to follow through on this. Hopefully if you don't, you'll explain why (unlike some other threads where the wall of silence is deafening).

ETA:

Might want to check this thread for suggestions on how to set your criteria.

 
I think an interesting way to compare a top 20 list to the top 20 results would be to map each player onto a unique character and use a string similarity metric. The benefit of this is that string similarity metrics have already been implemented ( http://secondstring.sourceforge.net/ ), and you can adjust the various penalties so the score is relevant to fantasy football.
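
For the curious, here's a rough sketch of that idea using Python's standard-library difflib in place of the Java SecondString package; each player maps to one character, and the two orderings are compared as strings (all names are made up):

```python
from difflib import SequenceMatcher

projected_top5 = ["Brees", "Manning", "Brady", "Rodgers", "Rivers"]
actual_top5 = ["Manning", "Brees", "Rodgers", "Brady", "Romo"]

# One character per unique player across both lists (fine for < 26 players).
unique = dict.fromkeys(projected_top5 + actual_top5)
codes = {p: chr(65 + i) for i, p in enumerate(unique)}

s1 = "".join(codes[p] for p in projected_top5)
s2 = "".join(codes[p] for p in actual_top5)
print(SequenceMatcher(None, s1, s2).ratio())  # 1.0 means identical orders
```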

 
I would also add my voice to the suggestion that you compare projected stats (which I think all of the sites mentioned provide). Then the rankings and fantasy points can be customized to individual leagues. You could probably distribute the stat tracking to FBG volunteers: mock up a spreadsheet format and then load it into whatever statistical analysis you choose. I would also be willing to participate, as I tend NOT to sit around figuring out my own projections but do want to know who is most accurate.

 
moleculo and PolishNorbi, PM incoming. Thank you. If anyone else would like to help, I'll gladly take the assistance. I wish I could've started this sooner, but my day job is also fantasy-related, and the pre-season is the busiest time. If anyone has data-scraping or database experience, that could really make a big difference, though I'm content to do it semi-manually (copy/paste/Excel/etc.) if I have to.

rpfote, I'll read up on this. I'm definitely intrigued by some of the programming techniques I've heard about recently for analyzing data sets.

 
Have you asked any of the sites if they have an API developed for their data?
 
I doubt most have, but you don't need one. You can pull tables using Excel; as long as they follow a standard format for the webpage and a standard format for their listings, it should be really easy. (My best help is in Excel... can't do much with databases, though.)
 
Yeah my guess is most sites aren't too concerned about an API, but hopefully we can get the process of grabbing the data figured out pretty easily anyway.

 
I think an interesting way to compare a top 20 list to the top 20 results would be to map each player onto a unique character and use a string similarity metric. [...]
wat
 
A restricted range of data will inevitably lower the reliability and validity of these results. That's a complicated way of saying that all the effort you'd put into getting it done will still yield data you can't put much faith in.

There are other sites/persons who have already undertaken this project and post results for free. See everyone's favorite librarian for an example:

http://www.fflibrarian.com/2009/02/accurac...s-for-2008.html

One might interpret the results to say "fantazzle is the best site to get accurate predictions" but it would take many more years of awesome augury to prove with statistical significance that fantazzle's predictions weren't due to chance.

Basically, it's a fun exercise for one year's bragging rights; that's it.

 
A restricted range of data will inevitably lower the reliability and validity of these results. [...] Basically, it's a fun exercise for one year's bragging rights; that's it.
That fflibrarian study is looking at a whole different question: the accuracy of predraft rankings. This endeavor would measure the accuracy of week-to-week projections.

Anyway, for anyone still interested, I've done some messing around -- developed a method, acquired the data off the web, cranked out the computations, and tabulated the results.

I used all the websites I could get data from: I subscribe to FBGs, so I used Dodds and Bloom's projections; I have leagues in CBS and ESPN; and FantasySharks and FFToday have free content on their sites.

I'm using the following scoring:

passing yards = 1/25

passing TDs = 4

passing INT = 0

rushing yards = 1/10

rushing TDs = 6

receiving yards = 1/10

receiving TDs = 6

I believe this is the FBG standard, but I haven't checked to confirm. Maybe they subtract for INTs, but it'd be a minor difference. I've set this up as a user input so it's easily changed.
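
For reference, a minimal sketch of that scoring as a user-editable table (the stat-line key names are my own invention):

```python
SCORING = {
    "pass_yd": 1 / 25, "pass_td": 4, "pass_int": 0,
    "rush_yd": 1 / 10, "rush_td": 6,
    "rec_yd": 1 / 10, "rec_td": 6,
}

def fantasy_points(stat_line, scoring=SCORING):
    """Dot-product of a stat line with the scoring weights."""
    return sum(scoring.get(stat, 0) * value for stat, value in stat_line.items())

print(fantasy_points({"pass_yd": 300, "pass_td": 2, "pass_int": 1}))  # 20.0
```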

I measured forecast error in the way outlined in the original post, namely %Error = |forecast - actual| / forecast

For now, I'm restricting the universe to each site's own weekly top 20 QBs, 50 RBs, 50 WRs, and 20 TEs. This is also a user input that can be tweaked and updated instantly.

Each site's performance was measured using median forecast error and mean forecast error, by week and by position, and also across all weeks and all positions.
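
A minimal sketch of that per-site summary, assuming forecasts and actuals are kept as player-to-points dictionaries (names and numbers are made up):

```python
from statistics import mean, median

def summarize(forecasts, actuals):
    """Median and mean |forecast - actual| / forecast over a site's universe."""
    errors = [abs(forecasts[p] - actuals[p]) / forecasts[p]
              for p in forecasts if forecasts[p] > 0]
    return median(errors), mean(errors)

med, avg = summarize({"Brees": 20.0, "Rice": 15.0},
                     {"Brees": 12.0, "Rice": 16.5})
print(f"median {med:.0%} / mean {avg:.0%}")  # median 25% / mean 25%
```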

Onto the results through the first three weeks of 2009...

First Place: Sigmund Bloom (median error of 48% / mean error of 53%)

Second: David Dodds (49% / 55%)

Third: CBS (50% / 56%)

Fourth (tie): ESPN and FFToday (both 54% / 59%)

Last: FantasySharks (55% / 62%).

I'd be happy to expand the analysis to include additional sites if someone wants to get me the data, or access to the content.

I plan to track this throughout the year to see how things evolve.

Hope some of you find this interesting. :stalker:

 
I think these studies are pointing out that it's been a rough first three weeks for projections. If you look at last year's week 3 article of that Analyzing the Experts study I posted, it seems like the projections were doing a bit better, for sure.

http://www.fftoday.com/articles/nestrick/07_ate_contest3.htm

They use a different methodology this year, but you can see that most of the "experts" killed it in week three last year, whereas this year, not so much.

 
davearm said:
Hope some of you find this interesting. :stalker:
Sure, it's entertaining. Just know that one man's weekly predictions aren't predictive of the next week's. Or the next year's. Or the year after that.

Pardon the analogy, but do you chase after hot stock mutual fund managers who have amazing one-quarter or one-year returns? It takes many, many years to truly evaluate whether a mutual fund manager is a terrific stock picker or whether his fund is performing well due to random chance. Evaluating FF experts is fun. Period. Results will vary widely from year to year. A harder exercise would be to quantify how the experts teach you different ways to approach FF problems with novel solutions (e.g., The Perfect Draft or Matt Waldman's creative draft strategies), not figuring out how individual players might do each week. That's the best part of reading experts' opinions.
 
Sure, it's entertaining. Just know that one man's weekly predictions aren't predictive of the next week's. Or the next year's. Or the year after that. [...]
I'll have to respectfully disagree with your position. In a nutshell, what you're arguing is that the sample size we're working with here is too small to yield statistically significant results.

Realize that at this point in the season, each forecaster is being judged on over 400 datapoints (140 player forecasts per week x 3 weeks). Presuming I carry the analysis through to the end of the year, each participant will have over 2,300 datapoints (140 x 17) on which to be judged. That should be more than enough to draw meaningful conclusions about how these experts' forecasting skills/methods measure up.

To use your mutual fund manager analogy, 20 years' worth of quarterly performance figures would yield 80 datapoints.
 
I think supersecretid is still planning to do this. I know he was gathering the data and wasn't planning to start looking at results until week 4 or 5 anyway. I'll email him to let him know that people here are interested.

Also wanted to second the post arguing that there will be enough data points. That's exactly right...by rating a large number of players every week there are enough data points to get a VERY reliable result. In one season, there will be more data points than you'd get in looking at 10 years of full season projections.

 
Be careful what you extrapolate from this analysis. Sure, you have 140 data points each week, but they each represent something different, and you're averaging them all together, so you're losing any meaning they might have. Maybe one guy is really good at projecting the top 5 RBs for the week but terrible at projecting WRs 12-24, while another guy is really good at projecting the #2 WRs but horrible at starting QBs. 140 different data points like this is not the same as if they each projected, say, Ray Rice's production for 140 weeks in a row. Or if they each projected the #1 RB for week 4 for 140 years in a row. So if you do all this work, what do you conclude from these "VERY reliable" results? The sheer number of data points doesn't solely determine the usefulness of the results.

First Place: Sigmund Bloom (median error of 48% / mean error of 53%)
Second: David Dodds (49% / 55%)
Third: CBS (50% / 56%)
Fourth (tie): ESPN and FFToday (both 54% / 59%)
Last: FantasySharks (55% / 62%)

I see a bunch of numbers and nothing useful, unfortunately. Besides, if you're going to go ahead with the analysis anyway, the % error may not be the optimal way to score these. I mean, if I project a guy to score 18 points and he scores 9 (50% error), is that really the same as if I project a guy to score 6 points and he scores 3 (50% error)? Or on the other hand, if I project a guy to score 18 points and he scores 17 (~5.5% error), is that really three times better than if I project a guy to score 6 points and he scores 5 (~16.7% error)? Or let's say Dodds projects four of the top five RBs exactly right but is completely wrong about the fifth, while Bloom's projections for each of the top 5 RBs are off by about 20%. Which one did better? Which set of projections is more useful for lineup-setting/waiver-picking/trade-analyzing? That is an important question you have to answer before you even bother crunching any numbers. Would you rather have projections that are pretty close, but never exactly right, for every player, or would you rather have projections that are exactly right for most players but completely wrong about the rest? And if you can come up with a convincing answer to that question, how do you factor it into your analysis?

Count me in the club that thinks this might be entertaining but not at all useful. And I do stats for a living.
 
All good points, and IMO the flaws you point out in the current method are valid.

I'm still playing around with the performance measures and experimenting with a bunch of different ones. In addition to the mean and median percent error numbers I showed above, MFE, MAE, MAPE, MSE, and RMSE are all metrics I've coded up and observed. Each shows some promise, and each has inherent advantages and disadvantages.

This is very much a work in progress, so if you can lend your stats expertise to improving the analysis, I'd be appreciative. The criticism is fine, but better alternative approaches would be even more useful. :thumbsup:
 
I didn't mean to be critical of you or anything. I was just pointing out the problems with this type of analysis and responding to the guy who said there are enough data points to get a very reliable result.

The point isn't that you're doing it wrong. The point is that it can't really be done. I've thought about doing something similar in the past, but after thinking about it, you come to the conclusion that no matter which way you try the analysis, the results will be mostly meaningless. I can't offer a better alternative approach because I don't think there is one.
 
The cumulative leaders so far in the FFToday "Analyzing The Experts" column by D.J. Nestrick: http://www.fftoday.com/articles/nestrick/09_ate_wk5.htm

FF Toolbox 416

Football Guys 406

Rotoworld 406

Fox 404

FF Cafe 400

CBS 398

FF Sharks 398

NFL 397

Yahoo 396

AOL 396

FF Today 395

The Huddle 394

KFFL 388

ESPN 369

I can't vouch for the methodology's accuracy or usefulness, but just seeing ESPN at the bottom is worth a few chuckles.

BTW, this article pairs these FF sites in head-to-head competitions for prediction accuracy, and right now FBGs has a 4-1 record. The head-to-head aspect is an entertaining way to deliver the info, but the useful info is really the overall score.

I'd like to see a breakout of how each site grades out for each fantasy position (QB, RB, WR, TE, K and DEF) since some people have a better handle on various positions than others.

 
The cumulative leaders so far in the FFToday "Analyzing The Experts" column by D.J. Nestrick: http://www.fftoday.com/articles/nestrick/09_ate_wk5.htm [...]
Are these scores based on weekly rankings or weekly projections (yds, TDs)?
 
Just thought I'd bump this thread to share my findings now that I have ten weeks of data to work with.

What I've done is gather together the weekly projections from 7 "expert" sources, and compared them against the weekly actual points scored. The sources I'm using are FBGs Dodds and Bloom, along with FFToday, CBS, ESPN, CNNSI, and Fantasy Sharks.

The scoring I'm using is pass yard = .04, pass TD = 4, INT = -1, rush/rec yard = 0.1, rush/rec TD = 6, zero PPR.

I'm limiting the analysis scope to each source's top 20 QBs, 40 RBs, 50 WRs, and 20 TEs in each week. Thus after 10 weeks, each source is being graded using 1300 datapoints: 10x(20+40+50+20).

Each source's projection accuracy is measured using six different error measures: Mean Forecast Error, Mean Absolute Error, Mean Percentage Error, Mean Absolute Percentage Error, Mean Squared Error, and Root Mean Squared Error. Then each source is ranked on each of these error measures, 1 through 7. Finally, I take the average rank across the six measures to get the source's "final" score. This ranking analysis is performed by position, and overall.
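
For anyone wanting to replicate this, here's a minimal sketch of the six error measures and the average-rank scoring. The function names and data shapes are my own; in particular, ranking by absolute value is one reasonable way to handle the signed MFE/MPE measures, not necessarily what was done here:

```python
from math import sqrt
from statistics import mean

def error_measures(forecasts, actuals):
    """Parallel lists of projected and actual points for one source."""
    pairs = [(f, a) for f, a in zip(forecasts, actuals) if f > 0]
    errs = [f - a for f, a in pairs]
    return {
        "MFE": mean(errs),                               # signed bias
        "MAE": mean(abs(e) for e in errs),
        "MPE": mean((f - a) / f for f, a in pairs),
        "MAPE": mean(abs(f - a) / f for f, a in pairs),
        "MSE": mean(e * e for e in errs),
        "RMSE": sqrt(mean(e * e for e in errs)),
    }

def average_ranks(sources):
    """sources: {name: measures dict}. Rank 1..N on each measure
    (closest to zero is best), then average the six ranks."""
    ranks = {name: [] for name in sources}
    for m in next(iter(sources.values())):
        ordered = sorted(sources, key=lambda name: abs(sources[name][m]))
        for i, name in enumerate(ordered, start=1):
            ranks[name].append(i)
    return {name: mean(r) for name, r in ranks.items()}
```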

Onto the results:

Dodds and Bloom are the two most accurate predictors, with Dodds having the edge over Bloom.

FF Today and CBS are more or less equal as the next-tier predictors.

ESPN, Fantasy Sharks, and CNNSI are clearly underperforming relative to the four sources above.

The overall (non-position specific) rankings are:

Dodds 1.3

Bloom 2.2

FFToday 3.5

CBS 3.8

ESPN 5.2

Sharks 5.5

CNNSI 6.5

A positional breakdown shows that Dodds excels at the WR and TE positions; Dodds and Bloom are tied at the RB position, and Bloom outperforms Dodds and the rest at the QB spot.

QB

Bloom 1.5

Dodds 3.3

FFToday 3.5

CBS 3.7

CNNSI 4.5

ESPN 5.3

Sharks 6.2

RB

Bloom 1.7

Dodds 1.7

FFToday 3.2

CBS 4.3

ESPN 5.3

Sharks 5.5

CNNSI 6.3

WR

Dodds 1

Bloom 2.2

CBS 3.5

FFToday 4.2

Sharks 5.3

ESPN 5.7

CNNSI 6.2

TE

Dodds 1.3

Bloom 2.2

ESPN 3.7

CNNSI 4.3

FFToday 5

CBS 5.5

Sharks 6

One thought I had as to why the FBGs are dominating this thing is that they are the only source to generate fractional TD forecasts. All of the other sources project TDs in whole numbers. Dodds gets a further boost because he is the only source to update his projections as the week progresses, so he's able to incorporate late-week information such as injury updates into his numbers.

Anyway, hope some folks find this useful.

 
Just thought I'd bump this thread to share my findings now that I have ten weeks of data to work with.
Wow! Great work! Thanks! And good news on who's in the lead! I guess we're at the right place! (Now I just wish I could get this info for IDP positions...)

Eph
 
