GoBears84
Footballguy
This took a little longer than expected...
This is the final summary of my attempt to analyze the predictions from 2006 Projections Dominator as compared to the year’s final results. The individual analysis by position can be found at the following links:
PK,
QB's,
RB's,
WR’s
TE’s and
Team D.
My analysis involved calculating the Points Per Game (PPG) for each player based on the number of games claimed to have been played in 2006.
From there I took the projections from Dodds, Henry, Smith, Tremblay and Wood for QB, RB, WR and TE, divided them by 16, and subtracted the predictions from the actuals to get a residual (a measure of how far off the prediction was). Then I squared the residuals.
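The calculation above can be sketched in a few lines of Python. This is only an illustration of the method as described (the player numbers below are made up, not from the analysis):

```python
# Sketch of the residual calculation described above.
# actual_points: a player's actual season total
# games:         games the player was credited with playing
# projection:    an expert's projected season total (divided by 16 for PPG)

def ppg_residual(actual_points, games, projection):
    """Return the per-game residual (actual PPG minus projected PPG)
    and its square."""
    actual_ppg = actual_points / games
    projected_ppg = projection / 16
    residual = actual_ppg - projected_ppg
    return residual, residual ** 2

# Hypothetical example: a QB who scored 320 points in 15 games,
# projected for a 288-point season.
res, sq = ppg_residual(320, 15, 288)
```

Note that dividing the projection by 16 assumes a full 16-game season, while the actual PPG uses games actually played, which is why players who missed time don't automatically look like busts.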
I then sorted all the data by PPG scored. It was immediately apparent that QB’s had many more statistical outliers than any other position. There were 5 QB’s in the top 24 who had significantly better performances than predicted: Cutler, Young, Campbell, Feeley, and Leinart. None of these guys were starters at the beginning of the season, and they weren’t expected to start as soon as they did. They significantly outperformed all of the expert predictions, so they were not included in the analysis.
To balance the analysis I looked at the top 24 for QB’s, RB’s, WR’s and TE’s as individual averages and as one big pool. Since only Dodds and Smith predicted Team D’s and only Dodds, Herman, Smith predicted PK’s, those were kept out of the big analysis.
The means of the Points per Game residual from the predictions (minus data from the outliers) for the top 24 of each position were:
Top 24…………..…..QB…………..RB…………...WR…………...TE……….…..Total
Dodds PG…………..3.16…………..4.87…………..1.87…………..1.41…………..11.30
Henry PG…………..3.43…………..4.78…………..1.85…………..1.62…………..11.67
Smith PG…………..2.71…………..4.85…………..1.66…………..1.76…………..10.97
Tremblay PG……....3.18…………..5.10…………..2.19…………..1.43…………..11.89
Wood PG…………..3.21…………..4.71…………..1.81…………..1.54…………..11.27
The variances were:
Top 24……….……..QB…………..RB……….…..WR……….…..TE……….…..Total
Dodds PG…………..3.98…………..5.47…………..2.40…………..1.77…………..13.62
Henry PG…………..4.53…………..5.07…………..2.15…………..2.15…………..13.89
Smith PG…………..3.25…………..5.51…………..1.92…………..1.94…………..12.62
Tremblay PG…..…..3.86…………..5.62…………..2.32…………..1.77…………..13.57
Wood PG…………..4.35…………..5.37…………..2.32…………..1.77…………..13.80
Looking across the prediction space, Smith is generally on the lower end of the residual mean and variance. All the other experts were pretty similar.
Pooling all the results into the top 100 (actually 96), the means of the residuals were:
Top 96………….......All
Dodds PG…………..3.32
Henry PG…………..3.39
Smith PG…………..3.19
Tremblay PG……....3.41
Wood PG…………..3.41
And the variances were:
Top 96………….......All
Dodds PG…………..4.52
Henry PG…………..4.48
Smith PG…………..4.38
Tremblay PG….……4.54
Wood PG…………..4.82
Again, Smith is slightly lower.
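The pooled numbers above come from lumping all positions into one group and taking the mean and variance of the residuals. A minimal sketch of that step (the residual values here are made up for illustration, not taken from the tables):

```python
from statistics import mean, pvariance

def pooled_stats(residuals_by_position):
    """Pool per-position residual lists into one group and return
    (mean, population variance)."""
    pooled = [r for position in residuals_by_position for r in position]
    return mean(pooled), pvariance(pooled)

# Hypothetical residuals for two positions:
m, v = pooled_stats([[3.1, 2.8, 4.0], [1.5, 2.2]])
```

Whether to use population or sample variance is a judgment call; with 96 players pooled, the difference between the two is negligible.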
Our previous analysis of Team D’s showed that the means and variances of the Points per Game residual from the predictions were:
Top 24 D’s..……Means……..Variance
Dodds …………..1.97…………..2.75
Smith …………..2.01…………..2.15
And for kickers, the means of the Points per Game residual from the predictions were (minus data for Gould who was an outlier):
Top 24 PK…..……Mean……...Variance
Dodds PG…………..0.89…………..0.94
Herman PG…………1.35…………..1.33
Smith PG…………..1.16…………..1.28
What does it all mean? My analysis suggests that overall, the predictions of the experts are statistically similar and that the differences that do exist are minor and probably due to chance.
However, there are some differences, and I think this is what makes the PD so powerful. Instead of relying on just the projections of Dodds in DD, the weight can be spread across the experts. Therefore, if someone misses badly on a prediction, the impact is not as significant.
I also think that there is some "group think" here. While the experts all claim to be independent thinkers, the fact is that all of the projections are fairly similar, with some outliers. This may be because they all start out from the same baseline (2005 EOY), or because they all read the same articles and boards and there are some built-in biases to follow the leader. Best example: Willie Parker. Last year was his first full year as a starter, yet all the experts underprojected him by 72 ± 5 points. That's too close to be random variation.
Time will tell if one expert is consistently better than another at certain positions. But right now we only have one year of data and as I've maintained, past performance is no guarantee of future projections.
Joel