This may seem like nitpicking... and I figure bagger knows what I'm about to say, but some of what he said could be misleading to someone new to the hobby. So I'm going to do it anyway.
That is the typical fallback of people who project stats for players without any regard for constraints. Your projections have to be constrained in a way that is consistent with league history: players on the same team have to fit within realistic team totals, and players across the league cannot, as a group, all be higher than what players have produced in years past.
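To make that concrete, here's a rough sketch (player names and totals are made up, not real data) of scaling a set of projections back so they can't exceed a historical team or league total:

```python
# Minimal sketch: scale projected rushing yards so they can't exceed a
# historical team (or league) total. All names and numbers are hypothetical.

def constrain_to_history(projections, historical_total):
    """Scale a dict of projected yards so their sum doesn't exceed history."""
    projected_total = sum(projections.values())
    if projected_total <= historical_total:
        return projections  # already consistent with history
    factor = historical_total / projected_total
    return {player: yards * factor for player, yards in projections.items()}

# Example: three backs on the same team, against an assumed ~2,300 team rushing yards.
team_projection = {"RB_A": 1400, "RB_B": 900, "RB_C": 500}
print(constrain_to_history(team_projection, historical_total=2300))
```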
That holds unless you have a good, specific reason for doing otherwise, such as projecting passing and receiving higher going into 2004's point of emphasis on pass interference. Though to nitpick myself, my good, specific reason is itself historical production from when that same rule change went into effect. But that is still a great segue to my next nitpick.
There will only be so many 1,000-yard rushers in 2006. It is easy to justify why double the number of backs that typically rush for 1,000 yards could do it, but you'll be wrong, overpaying for marginal backs while letting value at other positions go.
This is the typical trap that people fall into with the RB "landgrab" that occurs in the first three rounds. People continually get burned on that next breakout player because they are hoping that RB X will hit the 1,000-yard rushing projection, even though they have already ranked more than the historical number of 1,000-yard rushers.
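A quick way to catch yourself in that trap is to simply count how many backs you've projected over the threshold and compare it to a historical norm (the projections and the norm below are placeholders, not real numbers):

```python
# Count projected 1,000-yard rushers and compare against a historical norm.
# All projections and the norm itself are placeholders, not real data.

projected_rush_yards = {
    "RB_A": 1550, "RB_B": 1350, "RB_C": 1200, "RB_D": 1150,
    "RB_E": 1100, "RB_F": 1050, "RB_G": 1020, "RB_H": 980,
}
HISTORICAL_NORM = 17  # plug in whatever recent-seasons average you trust

projected_count = sum(1 for yds in projected_rush_yards.values() if yds >= 1000)
print(f"{projected_count} projected 1,000-yard rushers vs. ~{HISTORICAL_NORM} historically")
```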
You have to go with historical numbers that reflect what it is you are trying to project. In my earlier example, the "correct" historical numbers to have used would have been recent passing #'s, modified up by the same amount as the pass interference rule change produced in previous seasons.

In this case, my nitpick is on the wording used. The number of historical 1,000-yard rushers is heavily influenced by injuries. Unless you feel you can adequately predict which RBs will get injured, you probably aren't projecting injuries, and so you shouldn't use historical figures that include injuries as your bounds.
So if you are going to use AVT to look at rushing yardage and you just use final-season results, you are probably hurting yourself by underestimating the lower RBs more than you are helping yourself. A better measure would be per-game stats, with some minimum number of games as the starter/primary back to eliminate short-term flukes.
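Here's a sketch of that per-game approach, again with made-up game logs: only games as the primary back count, and a minimum number of such games is required before the average means anything.

```python
from collections import defaultdict

# Made-up game logs: (player, rushing yards, was he the primary back that week?)
game_logs = [
    ("RB_A", 110, True), ("RB_A", 85, True), ("RB_A", 0, False),
    ("RB_B", 140, True), ("RB_B", 20, False), ("RB_B", 95, True),
]
MIN_GAMES_AS_STARTER = 2  # toy threshold; use something meaningful over a full season

totals = defaultdict(lambda: [0, 0])  # player -> [yards as primary, games as primary]
for player, yards, primary in game_logs:
    if primary:
        totals[player][0] += yards
        totals[player][1] += 1

per_game = {p: yds / games for p, (yds, games) in totals.items()
            if games >= MIN_GAMES_AS_STARTER}
print(per_game)  # {'RB_A': 97.5, 'RB_B': 117.5}
```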
If you don't overproject, you see the diminishing return of that marginal back in the 3rd round, and all of a sudden some players at another position stand out as a clearly better alternative.
I also want to point back up to what MT said earlier as his point #3. If you are going to build into your numbers the uncertainty that you have in your projection, it isn't a bad thing to be under historical numbers.

That is something I struggle with. It doesn't intuitively seem right to me to lessen the value of the top players while effectively increasing the value of the middling players (since top players can only go down, while middling players have a chance of vastly outperforming what you set for them). But when I think through it logically, I don't know that it is wrong to do it that way. If I think Peyton's most likely point total is 300 and Delhomme's is 200, but the true EV (expected value) is more like Peyton 260 and Delhomme 220, then by ignoring that I am not building into my value comparison my true feelings about the range over which both players may end up scoring.
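For what it's worth, the math on that is straightforward once you write down a range of outcomes. The probabilities below are invented purely to reproduce the 260/220 EVs from the example:

```python
# Probability-weighted expected points. Outcome distributions are invented
# only to illustrate the 300-vs-260 and 200-vs-220 gap described above.

def expected_points(outcomes):
    """outcomes: list of (probability, fantasy points) pairs summing to 1.0."""
    return sum(p * pts for p, pts in outcomes)

# Most likely season is 300, but injury/down-year scenarios drag the EV to ~260.
peyton = [(0.60, 300), (0.25, 240), (0.15, 133.3)]
# Most likely season is 200, but upside scenarios pull the EV up to ~220.
delhomme = [(0.50, 200), (0.30, 250), (0.20, 225)]

print(expected_points(peyton))    # ~260
print(expected_points(delhomme))  # 220
```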
It's probably fine to do it either way, as VBD should just be one tool in your decision making. If you project Peyton at 300 points, you still need to factor your uncertainty in that number into your decisions in some other fashion. While it is nice to capture it in your projection for quicker use, it may also add enough extra work to making your projections that many may prefer to just gut that part out.