Fantasy Football - Footballguys Forums
How much to react, exactly

Pyramid Scheme

All the talk in various threads about not panicking on the basis of one week's results got me wondering about how you adjust your production projections across the board to account for new information.

I chose a simple method--an EWMA (exponentially weighted moving average) model--to try to project weekly average point production. Each week, you generate a new projection by taking a weighted average of your old projection and that week's actual production. How sensitive the model is depends on how much weight you give the previous projection.

So, for example, if you had Larry Fitzgerald projected to produce 25 PPW under your scoring system, and this week he posted a 7, then with a weighting parameter of 0.6 on the old projection, your new projection would be 0.6*25 + 0.4*7 = 17.8.
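The update rule is small enough to sketch in a few lines of Python (the function name is mine, not from the post; the numbers match the Fitzgerald example above):

```python
def ewma_update(prev_projection, observed, weight=0.83):
    """Blend last week's projection with this week's actual production.

    `weight` is the fraction kept from the previous projection;
    the remainder (1 - weight) goes to the new observation.
    """
    return weight * prev_projection + (1 - weight) * observed

# The example above: 25 PPW projected, a 7 posted, weight 0.6
new_projection = ewma_update(25, 7, weight=0.6)  # 17.8
```

A higher weight means a stickier projection; at weight 1.0 you'd never react to new results at all, and at 0.0 you'd throw out your projection every week.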

I took 2006 data and examined the top 30 QB and top 60 RB (haven't had time to examine other positions yet) to try to calibrate the model. The idea is to find the weighting parameter that minimizes the error in the projections across all these players, week by week. I wasn't expecting to get the same weight for both positions, but I nearly did: 83% for QB, 82% for RB.

With an 83% weight on the previous projection, the RMS errors were 14 points per week for QB and 11 for RB. Before you scoff at these too hard, consider that under this scoring system the standard deviations for individual QB's in 2006 were in the neighborhood of 11-14 points per week, and for RB's were 8-12 points.
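The calibration step can be sketched as a grid search over candidate weights, scoring each weight by its pooled one-step-ahead RMS error. This is an assumption about the procedure, not the poster's actual code, and the player sample here is made-up toy data standing in for the 2006 top-30-QB / top-60-RB samples:

```python
import math

def rms_projection_error(weekly_points, weight, initial_projection):
    """One-step-ahead RMS error of EWMA projections over a season."""
    proj = initial_projection
    sq_errors = []
    for pts in weekly_points:
        sq_errors.append((pts - proj) ** 2)          # error of this week's projection
        proj = weight * proj + (1 - weight) * pts    # update for next week
    return math.sqrt(sum(sq_errors) / len(sq_errors))

def calibrate_weight(players, weights=None):
    """Grid-search the weight that minimizes pooled RMS error.

    `players` is a list of (initial projection, [weekly points]) pairs.
    """
    if weights is None:
        weights = [w / 100 for w in range(50, 100)]
    def pooled_error(w):
        errs = [rms_projection_error(pts, w, init) for init, pts in players]
        return sum(errs) / len(errs)
    return min(weights, key=pooled_error)

# Toy sample: a steady producer plus a player whose role changed mid-season
sample = [
    (20, [22, 18, 21, 19, 20, 23, 17, 20]),
    (12, [12, 13, 11, 2, 3, 2, 1, 2]),
]
best = calibrate_weight(sample)
```

On real data you'd pool over all players at a position, which is what makes the 83%/82% agreement across positions interesting.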

Obviously, this is no substitute for doing your homework and knowing the individual players' situations, but if you're looking for a guideline--or a reality check to make sure you're not being excessively pessimistic or optimistic--80/20 seems like a good rule of thumb: 80% what you thought last week, 20% what you actually saw this week.

 
http://subscribers.footballguys.com/2007/07stuart_goose1.php
 
Chase Stuart said:
Not surprised that others have tackled the same problem. This method is similar to, but not the same as, the one I suggest above. For fun, I compared the quality of the RB projections produced by both methods on the 2006 data set: the RMS error on the top 60 RB's is 10.1 for the method in the article and 10.6 for the method I suggest above. So they're comparable in terms of results. It doesn't surprise me that the article's projections are slightly better, since they involve estimating one parameter per week per position, whereas mine involves estimating only one parameter per position--and I'm doing the comparison on the very data that was used to calibrate the article's model.

Obviously, more parameters will work better if you're looking to fit historical data. For projection purposes, though, models with fewer parameters are generally better, since calibrating across the whole data set (rather than weekly slices) gives you higher-quality estimates. It's certainly suggestive that the projection weight turned out nearly the same for both of the positions tried so far.

The main difference in behavior is that the model in the article weights all weeks equally; the EWMA model assigns greater weights to more recent weeks. That gives it more wiggle, which is good if you want a method that responds to trends. For example:

Suppose we've projected a player to produce 10 PPW, and over the first 8 weeks his results are 10, 10, 10, 10, 10, 1, 1, 1. The projection going into each week, under the method in the article and under an EWMA model at 83%, is:

Week   Article   EWMA (83%)
1      10        10
2      10        10
3      10        10
4      10        10
5      10        10
6      10        10
7      9.32      8.47
8      8.49      7.20
9      7.66      6.15

It doesn't overreact, but it does respond more quickly to changes in a player's circumstances.
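The EWMA side of that example is easy to reproduce (a sketch; the function name is mine, the 83% weight and the 10/1 series come from the example above):

```python
def ewma_projections(initial, results, weight=0.83):
    """Return the projection going into weeks 1..len(results)+1.

    projections[0] is the preseason projection; each later entry blends
    the previous projection with that week's actual result.
    """
    projections = [initial]
    for pts in results:
        projections.append(weight * projections[-1] + (1 - weight) * pts)
    return projections

projs = ewma_projections(10, [10, 10, 10, 10, 10, 1, 1, 1])
# Weeks 1-6 stay at 10; weeks 7-9 fall to about 8.47, 7.2001, 6.146083
```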

 
Just ran the numbers for the top 100 WR's in 2006 and got a weighting parameter of 85%, which is definitely in the neighborhood of the others. Any choice in the 0.82-0.85 range, applied to all positions, produces errors across the board that are very close to their individual minima.
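The flatness claim can be checked by scanning the pooled error over a band of weights. This is only an illustration of the procedure, not the original analysis--the player pool below is hypothetical toy data, since the 2006 data set isn't included here:

```python
import math

def rms_error(points, weight, initial):
    """One-step-ahead RMS error of EWMA projections for one player."""
    proj, sq = initial, []
    for p in points:
        sq.append((p - proj) ** 2)
        proj = weight * proj + (1 - weight) * p
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical player pool standing in for the 2006 samples
pool = [
    (18, [20, 15, 19, 22, 16, 18, 21, 14]),
    (10, [12, 9, 11, 3, 2, 4, 3, 2]),
    (8,  [6, 10, 7, 9, 8, 12, 7, 9]),
]

# Pooled error at each candidate weight from 0.78 to 0.89; if the curve
# is flat near its minimum, any weight in that band works about as well.
errors = {w / 100: sum(rms_error(pts, w / 100, init) for init, pts in pool) / len(pool)
          for w in range(78, 90)}
```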

 
That's a very interesting point, PS. It won't make it in for this week, but that's definitely something to consider for week three or four (depending on how many installments I run with). I'm usually in the camp of averages >>>> splits, but you never know.

(Splits in the sense that first-half/second-half splits are generally meaningless for predicting Year N+1 results. I'd suspect the same thing here, but I could be wrong. Your method is more apt to track real changes, like when Portis or MJD started getting the load as rookies.)

 
