GoBears84
Footballguy
I've been trying to optimize the experts' predictions for the Projections Dominator. I analyzed kickers this afternoon here and tackled QBs tonight.
ETA: Added in means and reevaluated results. Upon consulting with a certified statistician (which I'm not), we discovered the original analysis was incomplete.
I have the 2006 actual data from this year's PD and I'm using standard FBG scoring (4 pts per passing TD, 1 pt per 20 passing yds, -1 per INT, 1 pt per 10 rushing or receiving yds, 6 pts per rushing/receiving TD). I found last year's PD projections (projforxx.php, where xx is the initials of the expert) dated 9/4/2006 and was able to open them in Excel.
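For anyone who wants to follow along, the scoring rule above is straightforward to put in code. This is just the formula as stated, applied to a made-up stat line (the numbers are invented for illustration, not from the PD data):

```python
# Standard FBG scoring as described above.
def fbg_points(pass_td, pass_yds, ints, rush_yds, rec_yds, rush_rec_td):
    return (pass_td * 4          # 4 pts per passing TD
            + pass_yds / 20      # 1 pt per 20 passing yards
            - ints               # -1 per interception
            + rush_yds / 10      # 1 pt per 10 rushing yards
            + rec_yds / 10       # 1 pt per 10 receiving yards
            + rush_rec_td * 6)   # 6 pts per rushing/receiving TD

# Hypothetical season: 30 pass TD, 4000 pass yds, 12 INT, 200 rush yds, 2 rush TD
print(fbg_points(30, 4000, 12, 200, 0, 2))  # → 340.0
```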
I took QB projections from Dodds, Henry, Smith, Tremblay and Woods and subtracted the predictions from the actuals to get a residual (a measure of how far off the prediction was). I ran the analysis once with as many predictions as possible, and again only with QBs who scored more than 100 points. Many low-end QBs were projected to score only a few points and did; including them biases the data toward looking accurate, so I used a 100-point cutoff to keep the residuals meaningful. Coincidentally, that cutoff worked out to exactly 32 QBs (not intended).
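The residual-and-cutoff step looks like this in code. The names and numbers here are invented for illustration; the real inputs were the projforxx.php exports and the 2006 actuals:

```python
# Hypothetical projections and actuals for three QBs (made-up data).
projections = {"QB A": 310.0, "QB B": 250.0, "QB C": 15.0}
actuals     = {"QB A": 342.5, "QB B": 231.0, "QB C": 12.0}

# Residual = actual - projection (positive means under-projection).
residuals = {qb: actuals[qb] - projections[qb] for qb in projections}

# Keep only QBs who actually scored more than 100 points, so
# near-zero residuals on low-end QBs don't wash out the comparison.
kept = {qb: r for qb, r in residuals.items() if actuals[qb] > 100}
print(kept)  # QB C is dropped by the cutoff
```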
I then loaded the data into my statistical program (JMP 6.0.3) and ran a one-way analysis, without assuming equal variances across the experts' predictions.
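For readers without JMP, one common way to do a one-way comparison without assuming equal variances is Welch's ANOVA. This is a sketch of that test, not a reproduction of JMP 6.0.3's exact procedure or output:

```python
from scipy import stats

def welch_anova(*groups):
    """Welch's one-way ANOVA; returns (F, df1, df2, p)."""
    k = len(groups)
    ns = [len(g) for g in groups]
    means = [sum(g) / n for g, n in zip(groups, ns)]
    # Unbiased sample variances.
    variances = [sum((x - m) ** 2 for x in g) / (n - 1)
                 for g, m, n in zip(groups, means, ns)]
    w = [n / v for n, v in zip(ns, variances)]       # per-group weights
    W = sum(w)
    grand = sum(wi * mi for wi, mi in zip(w, means)) / W
    A = sum(wi * (mi - grand) ** 2 for wi, mi in zip(w, means)) / (k - 1)
    B = sum((1 - wi / W) ** 2 / (n - 1) for wi, n in zip(w, ns))
    F = A / (1 + 2 * (k - 2) / (k * k - 1) * B)
    df1 = k - 1
    df2 = (k * k - 1) / (3 * B)
    return F, df1, df2, stats.f.sf(F, df1, df2)
```

A handy sanity check: with exactly two groups, Welch's ANOVA F equals the square of Welch's t (`scipy.stats.ttest_ind` with `equal_var=False`), and the p-values match.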
According to the analysis, the mean residuals of the predictions (with 95% confidence intervals) were:
Level      Mean   Lower 95%   Upper 95%
Dodds     31.28      2.698      59.865
Henry     30.25      1.666      58.834
Smith     27.13     -1.459      55.709
Tremblay  33.53      4.948      62.115
Woods     32.03      3.448      60.615
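As a sanity check on the table: the interval half-widths are all about ±28.6, which is consistent with a pooled standard error using the spread values reported further below (values which, given these widths, read as standard deviations rather than variances) and 5 × 32 − 5 = 155 error degrees of freedom. That is my assumption about how JMP built the intervals, not something stated in the output:

```python
import math
from scipy import stats

# Spread values reported below for the five experts (assumed to be SDs).
sds = [82.0, 82.1, 85.2, 77.5, 82.2]
n, k = 32, 5

# Pooled SD across equal-sized groups = sqrt of the mean variance.
s_pooled = math.sqrt(sum(s * s for s in sds) / k)
df = n * k - k                        # 155 error degrees of freedom
half_width = stats.t.ppf(0.975, df) * s_pooled / math.sqrt(n)
print(round(half_width, 2))           # → 28.58, matching the table's spacing
```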
The positive average residuals (actuals minus predictions) show that FBG under-projects the top players. When ALL the data was included (QBs 33 through 70 as well), the mean was closer to 0.
The standard deviations (the spread of the residuals) were as follows:
Dodds     82.0
Henry     82.1
Smith     85.2
Tremblay  77.5
Woods     82.2
These are interesting results. On average, Smith was the closest, but his residuals had the most spread. Conversely, Tremblay was a little farther off on average, but his spread was the smallest. By this analysis, Tremblay's predictions were the most consistent.
In plain English: Smith was closer on average but more scattered (think of a target with a cluster in the bulls-eye and another cluster on the outer edge), while Tremblay was slightly further off but tighter (think of a target with a cluster just outside the bulls-eye).
Most importantly, none of the experts' predictions were statistically any better than the others; let me repeat: statistically there is no difference, and the five confidence intervals above overlap heavily. Bear in mind that these numbers summarize accuracy over the entire year across all the QBs. I think it's impressive to see how close most of the predictions were.
I've still got RBs, WRs, TEs and Ds to do, but if anybody has any input on my methods, I'd be happy to hear it.
Joel