GoBears84
Footballguy
This is part 4 of an effort to compare the predictions from the 2006 Projections Dominator against the year's final results. So far we've looked at kickers (still needs to be updated with PPG data), QBs and RBs. I still have TEs and Ds to do, but they should be completed by Wednesday. When done, I plan on putting the Top 200 together to see how it all plays out.
I have the 2006 actual data from this year's PD and I'm using standard FBG scoring (4 pts/PTD, 1 pt/20 PYD, -1 for INT, 1 pt/10 yds rushing/receiving, 6 pts/R-R TD). I am also using the 2006 PD projections (projforxx.php where xx is the initials of the expert) dated 9/4/2006.
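For anyone who wants to reproduce the numbers, the scoring rules above boil down to a simple formula. This is just a sketch; the function name and argument names are mine, not anything from PD:

```python
# Standard FBG scoring as described above (4 pts/PTD, 1 pt/20 PYD, -1/INT,
# 1 pt/10 yds rushing or receiving, 6 pts per rushing/receiving TD).
def fbg_points(pass_td=0, pass_yd=0, ints=0, rush_yd=0, rec_yd=0, rr_td=0):
    return (4 * pass_td + pass_yd / 20.0 - ints
            + (rush_yd + rec_yd) / 10.0 + 6 * rr_td)

# Example WR season line: 1,200 receiving yards and 10 TDs
print(fbg_points(rec_yd=1200, rr_td=10))  # 180.0
```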
It’s a time-consuming process to get all of the data to line up, but I think the data will be useful to the FBG community. With the standardization of this year's PD, I hope next year will be easier.
Please note, the analysis method below is the one recommended to me by the statistician at my company. It is different from the methods recommended by ookook and Prussian. I’m sharing my dataset with them so that they can run their own analyses. I’m happy to share the data with anybody else who would like to see it. Just PM me.
I calculated the Points Per Game (PG) for each WR based on the number of games claimed to have been played in the 2006 actuals file that comes with PD (it's the last column of data).
From there I took the WR projections from Dodds, Henry, Smith, Tremblay and Wood, divided them by 16, and subtracted the predictions from the actuals to get a residual (a measure of how far off each prediction was). Then, based on a suggestion from ookook, I squared the residuals.
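The calculation described above is straightforward; here's a quick sketch (the input values are made up for illustration, not real 2006 data):

```python
# Residual calculation as described above: divide the season projection by 16
# to get a per-game prediction, subtract it from actual PPG, then square.
def squared_residual(projected_season_pts, actual_ppg, games=16):
    predicted_ppg = projected_season_pts / games
    residual = actual_ppg - predicted_ppg
    return residual ** 2

# Example: a WR projected for 160 season points (10.0 PPG)
# who actually scored 12.5 PPG
print(squared_residual(160, 12.5))  # 6.25
```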
Because there were 161 WRs in the dataset, I limited my analysis to the top 60, the top 24, the mid 24 and the bottom 15.
The means of the Points per Game residual from the predictions were:
               Top 60    Top 24    Mid 24    Bottom 15
Dodds PG        2.39      1.87      2.64       2.69
Henry PG        2.31      1.85      2.60       2.70
Smith PG        2.40      1.66      2.41       3.35
Tremblay PG     2.32      2.19      2.38       2.57
Wood PG         2.32      1.81      2.42       2.82
The variances were:
               Top 60    Top 24    Mid 24    Bottom 15
Dodds PG        2.98      2.40      3.00       3.40
Henry PG        2.74      2.15      2.88       3.21
Smith PG        3.11      1.92      2.76       3.99
Tremblay PG     2.70      2.32      2.75       3.24
Wood PG         2.74      2.32      2.52       3.28
Comparing the means and the variances suggests that the experts' predictions were not statistically significantly different from one another (which I hope passes muster as a statement).
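One simple way to check a claim like this is a paired t-test on two experts' squared residuals over the same players. This is only a sketch of that general technique, not the method my company's statistician used, and the numbers below are made-up stand-ins for the real per-player residuals:

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """t statistic for paired samples; |t| near 0 means
    little evidence of a real difference between the two experts."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Illustrative squared residuals for the same six players (fabricated values)
dodds = [1.2, 3.4, 0.5, 2.8, 4.1, 1.9]
henry = [1.0, 3.6, 0.7, 2.5, 3.9, 2.0]
print(round(paired_t(dodds, henry), 2))  # small |t| -> no significant difference
```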
Looking at the data, it was clear that Furrey and Colston were significant outliers; their predictions were off by so much that I omitted them from the Top 60 and Top 24 analyses.
What is most interesting is that the predictions for the top 24 WRs were better than those for the mid 24, and nearly a full point better than those for the bottom 15. Compare this to the FBG RB predictions, where the mid-24 RB predictions were better than those for the top 24 RBs.
You’ll also note that Smith was considerably off on his bottom 15 compared to the others. A glance through the raw data shows that he severely over-predicted Randy Moss, by 7.4 points a game, whereas Dodds, Henry, Tremblay and Wood were off by 6.16, 5.71, 4.75 and 5.83 points respectively.
Keep in mind, there is no statistically significant difference between the experts, though some might appear to be better than others...
Joel