
2006 FBG RB Predictions

GoBears84

Footballguy
I've been trying to optimize the experts' predictions for the Projections Dominator. I analyzed kickers and QB's yesterday (here) and looked at RB's tonight.

I have the 2006 actual data from this year's PD and I'm using standard FBG scoring (4 pts/PTD, 1 pt/20 PYD, -1 for INT, 1 pt/10 yds rushing/receiving, 6 pts/R-R TD). I found last year's PD projections (projforxx.php, where xx is the initials of the expert) dated 9/4/2006 and was able to open them in Excel.

I took RB projections from Dodds, Henry, Smith, Tremblay and Wood and subtracted the predictions from the actuals to get a residual (a measure of how far off the prediction was). I ran the data analysis with as many predictions as possible, and then again with only the top 48 and then the top 24 RB's based on 2006 actuals.

I then loaded the data into my statistical program (JMP 6.0.3) and did a one-way analysis of the data, assuming unequal variances of the predictions.
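For anyone who wants to poke at this outside of JMP, here is a rough Python sketch of the same pipeline. The file and column names are my assumptions, not the actual PD layout, and because scipy's one-way ANOVA assumes equal variances, pairwise Welch t-tests are added as a stand-in for JMP's unequal-variance comparison.

import pandas as pd
from itertools import combinations
from scipy import stats

def fbg_points(row):
    # Standard FBG scoring described above: 4 pts/pass TD, 1 pt per 20 pass yds,
    # -1 per INT, 1 pt per 10 rushing+receiving yds, 6 pts per rushing/receiving TD.
    return (4 * row["pass_td"] + row["pass_yds"] / 20 - row["ints"]
            + (row["rush_yds"] + row["rec_yds"]) / 10
            + 6 * (row["rush_td"] + row["rec_td"]))

actuals = pd.read_csv("actual_2006.csv")        # hypothetical CSV export of the PD actuals
actuals["actual_pts"] = actuals.apply(fbg_points, axis=1)
top48 = actuals.nlargest(48, "actual_pts")      # ranked on 2006 actuals

experts = ["dd", "bh", "cs", "mt", "jw"]        # projforxx.php initials
residuals = {}
for xx in experts:
    proj = pd.read_csv(f"projfor{xx}.csv")      # hypothetical save of each projforxx.php file
    proj["proj_pts"] = proj.apply(fbg_points, axis=1)
    merged = top48.merge(proj[["player", "proj_pts"]], on="player")
    residuals[xx] = (merged["actual_pts"] - merged["proj_pts"]).to_numpy()

# Classic one-way ANOVA on the residuals (equal-variance assumption)...
print(stats.f_oneway(*residuals.values()))

# ...plus pairwise Welch t-tests, which do not assume equal variances.
for a, b in combinations(experts, 2):
    print(a, b, stats.ttest_ind(residuals[a], residuals[b], equal_var=False))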

As expected, analyzing all of the data showed only minor differences.

According to the analysis, the means of the residuals of the predictions for the top 48 were:

Level Mean Lower 95% Upper 95%

Dodds 17.56 -0.569 35.694

Henry 16.9167 -1.215 35.048

Smith 17.0208 -1.111 35.152

Tremblay 24 5.868 42.132

Wood 12.7292 -5.402 30.861

The standard deviations (the spread of the residuals) were as follows:

Level Std Dev

Dodds 64.20178

Henry 61.82537

Smith 67.34272

Tremblay 64.93434

Wood 60.27057

For the top 24, the means of the residuals of the predictions were:

Level Mean Lower 95% Upper 95%

Dodds 49.125 26.766 71.484

Henry 48.125 25.766 70.484

Smith 45.75 23.391 68.109

Tremblay 54.0833 31.724 76.443

Wood 44.0833 21.724 66.443

The standard deviations (the spread of the residuals) were as follows:

Level Std Dev

Dodds 54.35896

Henry 51.76982

Smith 58.61833

Tremblay 55.5611

Wood 55.96654

Like the QB results, these are interesting. Here Wood and Henry appeared to be slightly closer than the rest. With the top 24, Tremblay appears to be off a little more than the others with his means, but not his variances.

It also appears as though the experts continually underproject the top players.

Most importantly, statistically, none of the predictions were any better than the others. Let me repeat: statistically there is no difference in the results.

I've still got WR's, TE's and D's to do, but if anybody has any input to my methods, I'd be happy to hear it.

Joel

 
God bless you guys, you sure do a lot of work. I'll make it easy for you. Tier the most talented RB's on the most talented teams with the best O-lines. When you get a significant separation of talent, make another tier. Follow suit with the other skill positions; of course, take equal WR talent and take the one with the most pass-happy "O", QB, etc. This projection chicken bleep is goofy and a waste of time. I see stat projections by anyone and it's like picking the Lotto. Come on fellas, keep it real. Get the math nerds away from the computer and watch some football on Sundays. Respectfully, and getting ready to purchase your site (you have a great site by the way, a must-have any given year, some goofy stuff or not)
 
Two thoughts.

First, have you considered taking the squared residual rather than simple difference? I am concerned about inaccuracies cancelling out.
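A tiny made-up example of the concern: two raw misses in opposite directions average out to nearly nothing, while the squared version (RMSE) still shows the typical size of the miss.

import numpy as np

residuals = np.array([30.0, -28.0, 25.0, -27.0])   # hypothetical misses that roughly cancel
print(residuals.mean())                            # 0.0, which looks like a near-perfect projection
print(np.sqrt((residuals ** 2).mean()))            # ~27.6, the typical miss size (RMSE)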

Second, the statement "statistically there is no difference in the results" is inconsistent with the data. To assert the null that there is no difference in means based on your ANOVA result is an error in statistical conclusion validity.

I see no numbers that were equal.

Now, you might say that you cannot rule out sampling error as an alternative explanation at some level of confidence, but I could just as easily argue you have the entire relevant population and there is no need for inferential statistics (i.e., sampling error cannot be responsible).

-OOK!

 
Well, I don't think Jason Wood is going to be happy on multiple levels here.
Maybe I don't understand why he would have a problem with these numbers unless he just doesn't appreciate the analysis. The numbers show two things: Woods appears to have done a better job than the others with RB's.

More importantly: Over the range of the top 24 and 48 RB's all the experts have statistically similar results - and I think that is important to realize.

I was thinking of some other ways to use these numbers and I'll be testing those ideas over the course of the next week. Some ideas I was considering - separating out the top players and looking at the middle-tiered players, QB's 10-22, RB's 10-34 and WR's 10-48, where a strong draft can make the most impact; combining the top 200 into one analysis; etc. Any other ideas will be considered.

Joel

 
I might also suggest as a measure of accuracy to rank the actual performance and rank the predicted performance from each expert. Then, per expert, compute the correlation between rankings.

Could also do it with raw projections.

This "within-person" correlation is a commonly used measure of accuracy in the social and behavioral sciences.

 
Two thoughts. First, have you considered taking the squared residual rather than simple difference? I am concerned about inaccuracies cancelling out. Second, the statement "statistically there is no difference in the results" is inconsistent with the data. To assert the null that there is no difference in means based on your ANOVA result is an error in statistical conclusion validity. -OOK!
The squared residuals are a good idea and shouldn't be too difficult to analyze. I'll go back and look at the QB's.

I stick by the statement that they are statistically the same. I did a couple of analyses on the RB data. An F-test on the variances (which is my initial goal of this exercise) indicated the variances were the same. Then I did a Means Comparison for all pairs using Tukey-Kramer HSD (nerd alert :clap: ), Alpha 0.05.

Abs(Dif)-LSD   Tremblay   Dodds     Henry     Smith     Wood
Tremblay       -44.244    -39.286   -38.286   -35.911   -34.244
Dodds          -39.286    -44.244   -43.244   -40.869   -39.202
Henry          -38.286    -43.244   -44.244   -41.869   -40.202
Smith          -35.911    -40.869   -41.869   -44.244   -42.577
Wood           -34.244    -39.202   -40.202   -42.577   -44.244

The analysis says that the differences are within the statistical confidence levels:

Level - Level      Difference   Lower CL    Upper CL
Tremblay Wood      10.00000     -34.2440    54.24400
Tremblay Smith      8.33333     -35.9107    52.57733
Tremblay Henry      5.95833     -38.2857    50.20233
Dodds Wood          5.04167     -39.2023    49.28566
Tremblay Dodds      4.95833     -39.2857    49.20233
Henry Wood          4.04167     -40.2023    48.28566
Dodds Smith         3.37500     -40.8690    47.61900
Henry Smith         2.37500     -41.8690    46.61900
Smith Wood          1.66667     -42.5773    45.91066
Dodds Henry         1.00000     -43.2440    45.24400
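For anyone following along outside of JMP, roughly the same pair of checks can be run in Python; a sketch using Levene's test for equal variances and statsmodels' Tukey HSD, reusing the per-expert residuals dictionary from the earlier sketch:

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# `residuals` is the dict of per-expert residual arrays built in the earlier sketch.
values = np.concatenate(list(residuals.values()))
groups = np.concatenate([np.full(len(v), name) for name, v in residuals.items()])

# Test for equal spread across experts (standing in for JMP's variance F-test).
print(stats.levene(*residuals.values()))

# Tukey HSD comparison of the mean residuals for all pairs, alpha = 0.05.
print(pairwise_tukeyhsd(values, groups, alpha=0.05))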
I might also suggest as a measure of accuracy to rank the actual performance and rank the predicted performance from each expert. Then, per expert, compute the correlation between rankings. Could also do it with raw projections. This "within-person" correlation is a commonly used measure of accuracy in the social and behavioral sciences.
Much harder to do -- but I'll be happy to share my dataset with you :thumbdown:
And as I just added to the QB thread, if this is not PPG it really needs to be.
I can argue the contrary here. Your assumption is that the experts are predicting full seasons. But this isn't necessarily true, otherwise you wouldn't see projections for back-ups. I do believe I've got the data and will try to take a look at it next week.

Thanks for all the input. It really is greatly appreciated.

Joel
 
Well, I don't think Jason Wood is going to be happy on multiple levels here.
Maybe I don't understand why he would have a problem with these numbers unless he just doesn't appreciate the analysis. The numbers show two things: Woods appears to have done a better job than the others with RB's. ...

Joel
:thumbdown:
 
Well, I don't think Jason Wood is going to be happy on multiple levels here.
Maybe I don't understand why he would have a problem with these numbers unless he just doesn't appreciate the analysis. The numbers show two things: Woods appears to have done a better job than the others with RB's. ...

Joel
:thumbdown:
LQTM (Laughing quietly to myself) - Thanks. What can I say - Sometimes you miss the obvious. I've updated my posts.
 
Whoa ... <Shrek voice> ... "Hold da phone".

Although I appreciate the analysis and one-way variation model, your analysis is a variance between the predictions and not a measurement of accuracy.

I assume you are using points scored, so take it to the next step and measure the accuracy of the predictions. A simple means of doing so would be the Coefficient of Variation (CV).

Using an average of the Std. Dev.s (about 62) and the average of the top 50 RBs in this year's predictions (about 161) as "markers", the CV = 0.373.

With the CV at 37%, there are too many variables to make this much more than an exercise in crunching numbers.
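As a quick sanity check of that figure using the rounded "marker" values quoted above (the 0.373 presumably comes from the unrounded averages):

avg_std_dev = 62.0        # rough average of the Std Devs reported earlier
avg_top50_pts = 161.0     # rough average projection for this year's top 50 RBs
print(avg_std_dev / avg_top50_pts)   # ~0.385 with these rounded inputs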

Not that I expect anyone short of a statistics class project to spend the time, but to measure the prediction and performance of single players over a period of time would provide incredible FF value. Then measure the one-way variance of each predictor with the resulting data for bragging rights.

An awesome project for a statistics class, which I tried to sell to more than a few students when I was teaching adjunct for a while.

... not enough beer for me to justify it personally.

 
I stick by the statement that they are statistically the same. I did a couple of analyses. An F-test on the variances (which is my initial goal of this exercise) indicated the variances were the same.
"You keep using that word. I do not think it means what you think it means."-Inigo Montoya

Question for you Joel:

Suppose you had the average number of hanging chads found in voting from each of the 50 states. Could ANOVA be used to tell you if the means were the same?

 
I stick by the statement that they are statistically the same. I did a couple of analyses. An F-test on the variances (which is my initial goal of this exercise) indicated the variances were the same.
"You keep using that word. I do not think it means what you think it means."-Inigo Montoya

Question for you Joel:

Suppose you had the average number of hanging chads found in voting from each of the 50 states. Could ANOVA be used to tell you if the means were the same?
:shrug: I'm listening to the soundtrack to the Princess Bride right now...at work. You are 100% correct. I'm an engineer and I'll never be accused of talking like a statistician. Obviously the numbers are different. I'll go with, "an analysis comparing the means suggests that they are not statistically different". That's the best I can do.

Joel

 
ookook sounds like he is a Scientist......and he is right.....although I did enjoy the analysis by GoBears84
 
Well, I don't think Jason Wood is going to be happy on multiple levels here.
Maybe I don't understand why he would have a problem with these numbers unless he just doesn't appreciate the analysis. The numbers show two things: Woods appears to have done a better job than the others with RB's. ...

Joel
Joel, it's OK...ever since Tiger took the PGA by storm, my name has been miswritten more times than I care to remember. Don't feel too bad, my name was even misprinted as Woods in our first FBG Magazine despite my being the editor! :goodposting:
 
Well, I don't think Jason Wood is going to be happy on multiple levels here.
Maybe I don't understand why he would have a problem with these numbers unless he just doesn't appreciate the analysis. The numbers show two things: Woods appears to have done a better job than the others with RB's. ...

Joel
Joel, it's OK...ever since Tiger took the PGA by storm, my name has been miswritten more times than I care to remember. Don't feel too bad, my name was even misprinted as Woods in our first FBG Magazine despite my being the editor! :football:
I feel your pain - I regularly get the "L" knocked out of my name. What's important is that we're taking a legitimate stab at analyzing the predictions. While I'm certain there will be much discussion regarding the correctness of any approach, we've generated the discussion and hopefully we can test different approaches and the resultant data will be interesting, and better still, useful.

Joel

 
Because apparently I have nothing better to do at work this afternoon... I combined two suggestions and added my own twist to re-evaluate the RB's.

I calculated the PPG for each RB based on the number of games claimed to have been played in the 2006 actuals file that comes with PD (it's the last column of data). From there I divided all of the expert predictions by 16 and calculated the residuals by subtracting the predicted from the actual. Then, based on the suggestion of ookook, I squared the residuals. I then filtered the data twice - once for the top 24 RB's and once for the mid 24 RB's - and then modelled the data.
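A rough sketch of that per-game version, again with placeholder file and column names (the games column is whatever the last column of the PD actuals file is called):

import pandas as pd

df = pd.read_csv("rb_2006_merged.csv")              # hypothetical merged actuals + projections
df["actual_ppg"] = df["actual_pts"] / df["games"]   # games played, per the PD actuals file

experts = ["dodds", "henry", "smith", "tremblay", "wood"]
top24 = df.nlargest(24, "actual_pts")
mid24 = df.nlargest(48, "actual_pts").nsmallest(24, "actual_pts")

for name in experts:
    for label, grp in [("Top 24", top24), ("Mid 24", mid24)]:
        resid = grp["actual_ppg"] - grp[name] / 16  # expert projections are season totals
        rmse = (resid ** 2).mean() ** 0.5           # square root puts it back in PPG units
        print(f"{name:9s} {label}: {rmse:.2f}")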

The means of the squared residuals of the predictions were (the numbers are shown as square roots to put them back into standard units):

...................Top 24.....Mid 24

Dodds PG2......4.87......3.71

Henry PG2..... 4.78......3.34

Smith PG2......4.85......3.86

Tremblay PG2..5.10.....3.74

Wood PG2........4.71.....3.19

The variances were:

................................Top 24.......Mid 24

Dodds PG2.................5.47.........4.50

Henry PG2.................5.07..........4.20

Smith PG2.................5.51..........4.65

Tremblay PG2............5.62..........4.45

Wood PG2..................5.37.........3.91

The data analysis comparing the means and the variances suggests that the experts are not statistically different (which I hope passes muster as a statement). What is also interesting is that the predictions of the mid-24 RB's are better than those of the top 24 RB's. What this means to me is that the top RB's are tops because they had break-out years. Tomlinson, Jackson, Gore, Parker, and Jones-Drew all scored significantly more than projected; whereas known studs such as Johnson, Barber and Westbrook all performed as expected - which was still good.

But not without some stinkers: Jordan, Williams, Brown, Perry and Moats all were considerably worse than projected.

Analyzing the 15 RB's in the top 48 with the largest negative delta to predicted gives a different story:

......Level..................Mean

Dodds PG2 ..................3.06

Henry PG2..................3.85

Smith PG2..................3.91

Tremblay PG2................3.68

Wood PG2..................3.96

.........Level..................Std Dev

Dodds PG2 ..................3.40

Henry PG2..................3.86

Smith PG2..................4.18

Tremblay PG2................3.14

Wood PG2..................3.88

Although the data analysis comparing the means and the variances still suggests that the experts are not statistically different, those are some big differences. A cursory look at the raw data shows that Dodds did not miss as badly on Perry, Moats and White. That's almost 1 point per game better than the rest.

Keep in mind, there is no statistically significant difference between the experts, though some might appear to be better than others...

Joel

 
My IQ is somewhere under 287 so I had to read this entire thread about four times. All I'm sure of is Jason's last name is 'Wood' and not 'Woods'. Bottom line is this shows there's not a significant difference in whose projections we use? Or at least there wasn't last year?

 
My IQ is somewhere under 287 so I had to read this entire thread about four times. All I'm sure of is Jason's last name is 'Wood' and not 'Woods'. Bottom line is this shows there's not a significant difference in whose projections we use? Or at least there wasn't last year?
Yes, you are correct! The majority of the exercise is an effort to determine if there were any statistical differences in the projections used in PD. The answer appears to be no.

The other part of the exercise is to determine a good method for evaluating predictions going forward. There are a lot of smart people on this board with great input.

FWIW, I actually read these through twice before I post them to ensure "I" know what I'm talking about, and even then I make the occasional mistake.

Joel
 
Nice job GoBears!

Here is what I did last night.

In my analysis, I have used the FBGs initials to indicate individual raters.

I have also used PPG. If you want totals, just multiply the predicted PPG by 16.

Were the projections reliable?

That is, did they converge across raters?

Yes, extraordinarily so.

The average correlation between ratings of the top 55 RBs was around .95. Similar if only top 30 are considered.

One implication of this is that it will be hard and rare for projections of different raters to correlate very differently with actual PPG. That is, although raters would be considered reliable raters of the same thing, the high correlations constrain relationships with other variables to be similar.
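A minimal sketch of that reliability check, using the same placeholder sheet: average the pairwise correlations between the raters' projected PPG.

import pandas as pd
from itertools import combinations

df = pd.read_csv("rb_2006_merged.csv")              # hypothetical sheet with one PPG column per rater
raters = ["dd", "bh", "cs", "mt", "jw"]

pairwise = [df[a].corr(df[b]) for a, b in combinations(raters, 2)]
print(sum(pairwise) / len(pairwise))                # average inter-rater correlation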

Did the ratings under-predict or over-predict RB performance?

All were under-predictions of actual PPG:

• For RBs 1 to 30, projections tended to under-predict more (2.7 PPG)

• For RBs 31 to 55, projections slightly under-predicted (.9 PPG)

Were there mean differences in the projections?

Yes. ***

• For RBs 1 to 30, projections were highest from CS and lowest from MT.

• For RBs 31 to 55, projections were highest from JW and lowest from MT.

Whose ratings predicted actual performance the best?

It depends on which range of the distribution of projections you look at.

• For RBs 1 to 30, projected PPG correlated between .54 (CS) and .61 (BH) with actual PPG. The mean of the ratings correlated .58 with actual PPG.

• For RBs 31 to 55, projected PPG correlated between .35 (DD) and .47 (CS) with actual PPG. The mean of the ratings correlated .41 with actual PPG.

• For RBs 1 to 55 combined, projected PPG correlated between .71 (MT) and .74 (BH) with actual PPG. The mean of the ratings correlated .74 with actual PPG.

If you were to combine projections across raters, equal weights would be just as sensible as differential weighting.

You might predict in most cases better just using the most accurate rater.

For completeness, here were the correlations between projected PPG and PPG for the top 30 RBs that will end up being most important in most drafts:

DD = .58

BH = .61

CS = .54

MT = .56

JW = .54

AVG = .58 (where AVG is the mean across raters)

What equation would you use to best predict actual PPG?

• For RBs 1 to 30, PPG = (.56) BH + 7.56

• For RBs 31 to 55, PPG = (.18) MT + 5.78

• For RBs 1 to 55, PPG = (.74) BH + 3.72

OR

For RBs 1 to 55, PPG = (.76) AVG + 3.78 (where AVG is the mean across raters)
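For anyone wanting to reproduce equations of this form, they are just simple least-squares regressions of actual PPG on a rater's projected PPG; a sketch with the same placeholder columns:

import numpy as np
import pandas as pd

df = pd.read_csv("rb_2006_merged.csv")              # hypothetical merged sheet
top55 = df.nlargest(55, "actual_ppg")

slope, intercept = np.polyfit(top55["bh"], top55["actual_ppg"], deg=1)
print(f"PPG = ({slope:.2f}) BH + {intercept:.2f}")  # same form as the equations above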

When do mean differences matter?

Not really when deriving ranking within position, but much more when comparing projections across positions (e.g., using VBD). To the extent that the amount of under-prediction varies across positions, it will be necessary to pay attention to this and the intercepts in the equations.



PERHAPS THE MOST IMPORTANT IMPLICATION IS THAT USING DODDS' RATINGS ALONE, SUCH AS IS DONE BY DEFAULT IN DRAFT DOMINATOR, IS VERY VERY NEARLY AS GOOD AS ANY WEIGHTED COMBINATION OF THE OTHER RATERS AND IMPORTING FROM PROJECTIONS DOMINATOR.

-OOK!

*** I did find significant differences between raters using each RB as the unit of analysis, repeated measures ANOVA, and raters as the within-subject factor. Because the discrepancy scores (residuals) are just subtracting what amounts to a constant from each rating, they show the same differences. However, I do not think such tests are very informative or necessary here, as I do not believe there exists a possibility of obtaining the results by sampling error alone.
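For completeness, a sketch of the repeated-measures test described in that footnote, via statsmodels; the long-format reshape and column names are assumptions.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("rb_2006_merged.csv")              # hypothetical wide sheet: one projection column per rater
raters = ["dd", "bh", "cs", "mt", "jw"]

# Reshape to long format: one row per (RB, rater) with that rater's projected PPG.
long_df = df.melt(id_vars="player", value_vars=raters,
                  var_name="rater", value_name="proj_ppg")

# Each RB is the subject; rater is the within-subject factor.
print(AnovaRM(data=long_df, depvar="proj_ppg", subject="player", within=["rater"]).fit())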

 
ookook said:


PERHAPS THE MOST IMPORTANT IMPLICATION IS THAT USING DODDS' RATINGS ALONE, SUCH AS IS DONE BY DEFAULT IN DRAFT DOMINATOR, IS VERY VERY NEARLY AS GOOD AS ANY WEIGHTED COMBINATION OF THE OTHER RATERS AND IMPORTING FROM PROJECTIONS DOMINATOR.

-OOK!

Though different from mine, this was an awesome analysis. One point to keep in mind is that this was just the RB's. QB's, WR's, TE's and D's will have different results, and the offsets regarding the impact to VBD will need to be taken into account. It will be interesting to see an analysis of the top 100 and 200 players all at once.

If anybody else would like to provide their own analysis, I'm happy to provide the Excel sheets that I've generated. PM me with your email. Kickers, QB's and RB's are done. WR's have been started, and TE's and D's will be done soon. It's a painstaking process to get all of the data to line up, but I think it's worth the effort.

Joel

 
