Fantasy Football - Footballguys Forums

2006 FBG QB Predictions - updated with PPG

GoBears84

Footballguy
I've been trying to optimize the experts' predictions for the Projections Dominator. I analyzed kickers this afternoon (in a separate thread) and tackled QB's tonight.

ETA: Added in the means and reevaluated the results. Upon consulting with a certified statistician (which I'm not), we discovered the original analysis was incomplete.

I have the 2006 actual data from this year's PD and I'm using standard FBG scoring (4 pts per passing TD, 1 pt per 20 passing yards, -1 per INT, 1 pt per 10 yards rushing/receiving, 6 pts per rushing/receiving TD). I found last year's PD projections (projforxx.php, where xx is the initials of the expert) dated 9/4/2006 and was able to open them in Excel.
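For concreteness, that scoring rule can be written as a small Python function. This is just a sketch of the formula as described above; I'm assuming yardage points are scored fractionally (e.g. 250 passing yards = 12.5 pts), which may differ from how a given league rounds.

```python
def fbg_points(pass_yd, pass_td, ints, rush_yd, rec_yd, rush_rec_td):
    """Standard FBG scoring as described above:
    4 pts per passing TD, 1 pt per 20 passing yards, -1 per INT,
    1 pt per 10 rushing/receiving yards, 6 pts per rushing/receiving TD.
    Yardage is scored fractionally (an assumption)."""
    return (4 * pass_td
            + pass_yd / 20
            - ints
            + (rush_yd + rec_yd) / 10
            + 6 * rush_rec_td)
```

For example, a 4,000-yard, 30-TD, 12-INT passing season with 100 rushing yards and 2 rushing TD's works out to 330 points under this scoring.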

I took QB projections from Dodds, Henry, Smith, Tremblay and Wood and subtracted the predictions from the actuals to get a residual (a measure of how far off the prediction was). I ran the data analysis with as many predictions as possible and then again only with QB's that scored more than 100 points. There were many predictions where the QB was projected to score only a few points and did; this unfairly biases the data, so I picked 100 points as a cutoff to give the residuals significance. Coincidentally, this worked out to be 32 QB's (not intended).
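In code terms, the residual calculation and the 100-point cutoff look something like this (the player names and point totals are made up for illustration):

```python
# Hypothetical data: actual 2006 points and one expert's projections, keyed by player.
actuals = {"QB_A": 310.0, "QB_B": 95.0, "QB_C": 188.0}
projections = {"QB_A": 285.0, "QB_B": 120.0, "QB_C": 175.0}

# Residual = actual - projection, a measure of how far off the prediction was.
residuals = {qb: actuals[qb] - projections[qb] for qb in actuals}

# Restrict to QB's who actually scored more than 100 points, so that
# near-zero projections that trivially came true don't bias the analysis.
significant = {qb: r for qb, r in residuals.items() if actuals[qb] > 100}
```

With the numbers above, QB_B drops out of the filtered set because he scored under 100 points, even though his projection missed by 25.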

I then loaded the data into my statistical program (JMP 6.0.3) and did a oneway analysis of the data, assuming unequal variances among the predictions.
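For anyone without JMP who wants to try a similar significance check, a pairwise Welch t-test (which, like the oneway analysis described above, does not assume equal variances) can be run with SciPy. The residuals below are simulated to roughly match the reported means and spreads for two experts, not the real data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical residuals for two experts over 32 QB's, simulated with
# means (~31 vs ~34) and spreads (~82 vs ~78) in the reported ballpark.
expert_a = rng.normal(31.3, 82.0, size=32)
expert_b = rng.normal(33.5, 77.5, size=32)

# Welch's t-test: does not assume equal variances.
t_stat, p_value = stats.ttest_ind(expert_a, expert_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With spreads this large relative to the difference in means, the p-value will typically be far above 0.05, which matches the "no statistical difference" conclusion below.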

According to the analysis, the means of the residuals of the predictions were:

Level      Mean     Lower 95%   Upper 95%
Dodds      31.28      2.698       59.865
Henry      30.25      1.666       58.834
Smith      27.13     -1.459       55.709
Tremblay   33.53      4.948       62.115
Wood       32.03      3.448       60.615

The positive average residuals (actuals minus predictions) show that the FBG staff under-projects the top players. When ALL the data was included (QB's 33 to 70), the mean was closer to 0.

The standard deviations (the spread of the residuals) were as follows:

Dodds      82.0
Henry      82.1
Smith      85.2
Tremblay   77.5
Wood       82.2

These are interesting results. On average, Smith was the closest. However, he had more variability in his results. Conversely, Tremblay was a little farther off on his predictions, BUT his spread was smaller. By this analysis, Tremblay makes more consistent predictions.

In English this means that Smith was closer on his predictions, but he had more that were incorrect (think of a target with a bunch in the bulls-eye and a bunch on the outer edge). Tremblay was a little further off, but more consistent (think of a target with a bunch just outside of the bulls-eye).

Most importantly, statistically, none of the predictions were any better than the others. Let me repeat: statistically there is no difference. Bear in mind that this number is an indication of accuracy over the entire year for all the QB's. I think it's impressive to see how close most of the predictions were.

I've still got RB's, WR's, TE's and D's to do, but if anybody has any input to my methods, I'd be happy to hear it.

Joel

 
Last edited by a moderator:
Joel,

This sounds fine, I think you did everything right.

I just wonder about significance in this case. (I don't trust my naked eye viewing of st devs--I would prefer seeing mean diffs for each person and the confidence limits around them.) How much better in terms of points was Tremblay over the whole season? You may have found a real but not terribly meaningful difference. Same with Smith being worse.

I wonder how many WRs made the 100-pt threshold? Might want to back that off a bit for them, as I would think the ability to predict WR3s would be pretty important overall.

Good on ya!

 
Good stuff, Joel. Thanks. I'm wondering whether the data/variance changes if you narrow the sampling from the top 32 to the top 12 (typical starters) or top 24 (typical rostered QB's, though that may be a bit higher in reality). Hey, I'm a stat geek at heart and I'm always interested in numbers and results: the more the better.

Good work, thanks for sharing this with us.

 
Good stuff, Joel. Thanks. I'm wondering if you change the sampling from top 32 to the top 12 (typical starters) or top 24 (typical rostered QBs, though that may be a bit higher in reality) if the data/variance changes.. Hey, I'm a stat geek at heart and I'm always interested in numbers and results.. the more the better.Good work, thanks for sharing this with us.
Bob,

Besides the top 12 and top 24 for QB's, what would you like to see for RB's, WR's and TE's? There is also the issue of approach: does one analyze the top 24 from the start-of-year projections or the top 24 from end of year? (EOY is much easier.)

The analysis is easily done -- getting all of the data to match up and into a format where it can be analyzed is the hard part -- but I've got a good start on it. :goodposting:

Looking through the data I see some really cool things. For instance, the experts all pretty well nailed Brady, Favre, Rivers, Roethlisberger and Manning (7-11 by end-of-year stats), indicating that they do better at that mid-tier projection than they do at the top. Something to keep in mind when drafting. (Tremblay's predictions were off by 1, 22, 4, 3, and -10 respectively for these 5.)

I'll post more later.

Joel
 
FWIW, I think there's a lot of complicated factors in analyzing whose projections are best. I've got a lot of doubt about any system, including this one. Doug Drinen uses one that I really like, but he admits that it's not perfect, either. Injuries throw a big wrench in here.

 
Joel,This sounds fine, I think you did everything right. I just wonder about significance in this case. (I don't trust my naked eye viewing of st devs--I would prefer seeing mean diffs for each person and the confidence limits around them.) How much better in terms of points was Tremblay over the whole season? You may have found a real but not terribly meaningful difference. Same with Smith being worse.I wonder how many WRs made the 100-pt threshold? Might want to back that off a bit for them, as I would think the ability to predict WR3s would be pretty important overall.Good on ya!
BD,

I've got the data and will post it later. Regarding the threshold, I've only used 100 points for QB's, and only because it looked like a good spot to split the data (no statistical significance; it happened to match up with the top 32). Bob Henry suggests limiting to the top 12 or 24 QB's, which might be a better differentiator. Correspondingly, did you have some point threshold in mind for RB's, WR's and TE's?

Joel
 
Good stuff -- thanks for the hard work. An interesting comparison might be to check the variation in player performance from 2005 to 2006 and compare to the projections by the FBG staff. Does the staff offer something better than past performance as an indicator for future performance?
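One simple way to frame that comparison in code: correlate last year's totals with this year's actuals, then correlate the staff projections with this year's actuals, and see which relationship is stronger. A Python sketch with made-up season totals (not real 2005/2006 data):

```python
import numpy as np

# Hypothetical per-QB season totals.
pts_2005 = np.array([320.0, 280.0, 150.0, 90.0, 210.0])   # last year's actuals
proj_2006 = np.array([300.0, 260.0, 180.0, 120.0, 220.0])  # staff projections
pts_2006 = np.array([290.0, 240.0, 200.0, 60.0, 230.0])    # this year's actuals

# If the staff adds value, their projections should correlate with the
# 2006 actuals more strongly than a naive "repeat last year" baseline does.
r_naive = np.corrcoef(pts_2005, pts_2006)[0, 1]
r_staff = np.corrcoef(proj_2006, pts_2006)[0, 1]
```

Comparing `r_staff` against `r_naive` over the full player pool would answer whether the projections beat past performance as a predictor.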

 
Good info.

I think it may be useful to use points per game instead of EOY totals. Predictions are usually based on a player playing 16 games, so any player that missed any time may be off in total production, but less so on a ppg basis.

 
FWIW, I think there's a lot of complicated factors in analyzing whose projections are best. I've got a lot of doubt about any system, including this one. Doug Drinen uses one that I really like, but he admits that it's not perfect, either. Injuries throw a big wrench in here.
Great points, Chase. There will almost always be flaws in using a system to track/compare projections or rankings. I'm not overly concerned about it, though. In the end, I just like to see quantitative analysis.. :lmao:

Most people here are smart enough to understand where the imperfections and flaws enter into the system via injuries, etc., but it is good to note that just the same. I don't think the intent is to call out anyone for being better/worse, just to help us all understand where we can improve or where we performed best. Maybe not.. that's just how I look at it.
 
FWIW, I think there's a lot of complicated factors in analyzing whose projections are best. I've got a lot of doubt about any system, including this one. Doug Drinen uses one that I really like, but he admits that it's not perfect, either. Injuries throw a big wrench in here.
I think that the impact of injury when analyzing predictions is inconsequential, and here's why. Most projections take some impact of injuries into account. This is obvious because you see predictions for back-up players. A minor injury which keeps a player out a game or two is factored into the predictions. Drew Brees was ranked lower last year because of concerns regarding his shoulder.

A major injury, which takes a player out for a significant amount of time, will impact all projections equally -- the player's projection becomes an outlier and can be discarded.

The data actually shows that the biggest cause of poor prediction results is -- poor predictions. The top QB's that were undervalued were: Romo, Young, Leinart, Huard, Harrington, Garrard, Grossman, Garcia, Campbell, and Brees. Of those, Huard, Harrington, Garcia and Garrard got playing time because of injuries. The rest moved into the starting job because the #1 QB underperformed -- somebody predicted to do better than they actually did. Or, in the case of Brees, he simply outperformed the predictions.

Looking at those on the bottom of the list we see Delhomme, Hasselbeck, Brunell, and Plummer -- all of whom significantly underperformed, none of whom were significantly injured.

While no analysis is perfect, some are statistically better than others.

Joel
 
I don't think the intent is to call out anyone for being better/worse, just help us all understand where we can improve or where performed best. Maybe not.. that's just how I look at it.
People have been asking for an indicator of how well the FBG's project. With the ability to weight projections in LD, people are also curious to know how best to apply the weights.

All I am doing is looking at data. There are two things I clearly know to be true:

1) Kudos to the FBG's. The projections are all pretty close. While Tremblay may be a bit better for QB's, it's a matter of ~8 points over the course of a season per QB, or .5 pt/game. Using multiple projections in PD will minimize significant misses.

2) As my stock broker always says: past performance is no guarantee of future performance. Use at your own risk.

Joel
 
Good stuff -- thanks for the hard work. An interesting comparison might be to check the variation in player performance from 2005 to 2006 and compare to the projections by the FBG staff. Does the staff offer something better than past performance as an indicator for future performance?
I'll have to think about how best to do that -- the data is out there and I just need to figure out a means to correlate it.

Joel
 
I'm all for anyone looking at these numbers objectively. :shrug: Thanks for putting in the effort. While I understand Chase's point that any ranking system is flawed in some way shape or form, if we're going to put our numbers out there for people to consume (for a fee no less), we have to be open to analysis. I'm hoping this may help shed some light on things I may consistently do poorly (or correctly) as I build up my projection models.

 
Jason Wood said:
I'm all for anyone looking at these numbers objectively. :lmao: Thanks for putting in the effort. While I understand Chase's point that any ranking system is flawed in some way shape or form, if we're going to put our numbers out there for people to consume (for a fee no less), we have to be open to analysis. I'm hoping this may help shed some light on things I may consistently do poorly (or correctly) as I build up my projection models.
Doing an incomplete analysis is also flawed :no: . I walked back through the results with our company statistician this afternoon. While she didn't disagree with my original assessment, leaving out the means made it incomplete and slightly changes the interpretation. I've modified the initial post above to be more correct.

Thanks for all of the input. I'll work on RB's as soon as possible.

Joel
 
Were you using pts per season or pts per game?

To deal with the injury thing, you should surely use pts per game.

 
Were you using pts per season on pts per game?

To deal with the injury thing, should surely do pts per game.
2006 FBG Quarterback Predictions, Statistical Analysis of Accuracy

This is additional analysis of the QB data. As I've gone through the other positions I've refined my approach, and I wanted to update the QB data to show the PPG results.

Please note that the analysis method below is that recommended to me by the statistician at my company. We discussed it again today and we still feel it is appropriate. It is different than that recommended by others, but I’m happy to share the data with anybody else who would like to see it. Just PM me.

I calculated the Points Per Game (PPG) for each QB based on the number of games he played in 2006.

From there I took the QB projections from Dodds, Henry, Smith, Tremblay and Wood, divided them by 16, and subtracted the predictions from the actuals to get a residual (a measure of how far off the prediction was). Then I squared the residuals.
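A minimal Python sketch of that per-game residual calculation (player names and numbers are hypothetical):

```python
# Hypothetical per-QB data: (actual season points, games played, projected season points).
qbs = {
    "QB_A": (310.0, 16, 285.0),
    "QB_B": (150.0, 10, 260.0),   # missed games; PPG corrects for this
}

squared_residuals = {}
for name, (actual_pts, games, projected_pts) in qbs.items():
    actual_ppg = actual_pts / games       # PPG from games actually played
    projected_ppg = projected_pts / 16    # projections assume a 16-game season
    residual = actual_ppg - projected_ppg
    squared_residuals[name] = residual ** 2
```

Dividing actuals by games played while dividing projections by 16 is what keeps a player who missed time from looking like a badly missed projection on a total-points basis.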

I then sorted the data by PPG scored. It was immediately apparent that QB's had many more statistical outliers than any other position. There were 5 QB's who had significantly better performances than predicted: Cutler, Young, Campbell, Feeley, and Leinart. None of these guys were starters at the beginning of the season, and they weren't expected to start as soon as they did. They significantly outperformed all of the expert predictions and were not included in the analysis.

Because QB’s are limited on the roster, I limited my analysis to the top 24, the top 12, and QB’s 13-24

The means of the points-per-game residuals from the predictions were (minus data from the outliers):

              Top 24   Top 12   13 to 24
Dodds PPG      3.16     3.06      3.27
Henry PPG      3.43     3.18      3.68
Smith PPG      2.71     2.53      2.90
Tremblay PPG   3.18     4.03      1.81
Wood PPG       3.21     2.91      3.52

The variances were:

              Top 24   Top 12   13 to 24
Dodds PPG      3.98     3.43      4.49
Henry PPG      4.53     3.37      5.31
Smith PPG      3.25     3.05      3.50
Tremblay PPG   3.86     4.23      2.48
Wood PPG       4.35     3.14      5.11

The analysis indicates that while the final results are similar, the experts' predictions varied over the range of results. Tremblay was off the most on the Top 12 but much better on 13-24. Most of this was due to Grossman -- Tremblay's prediction was off by only 1.5 PPG, whereas everybody else was off by at least 6 pts/game.

Smith was slightly better than everybody else, and Henry/Wood were off the most, but not by an amount that is statistically significant.

What’s most interesting to me is that there were 6 QB’s who came off the bench to score significant points. In many leagues, these might be guys that are available on waivers. It also implies that there were 6 QB’s that underperformed.

Joel

 
