The weightings shouldn't necessarily weight all players equally; where a team's points are concentrated matters, not just the total.
I just ran a quick experiment. I gave Team A six wide receivers, and I assumed their per-week averages were 20, 18, 16, 14, 12, and 10.
I gave Team B seven wide receivers, all with per-week averages of 12.86.
If I weighted the top 7 WRs equally (including a zero for Team A's seventh slot), both teams would have the same projection: 90 points per week.
First I assumed that each WR's weekly score was normally distributed with standard deviation equal to half his expected points. Then I ran 100,000 simulated weeks and looked at the average total score of each team's top 3 WRs:
Team A: 63
Team B: 55
If you count the top 4 WRs, you get this:
Team A: 76
Team B: 68
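For what it's worth, here's a minimal sketch of the simulation in Python with NumPy; the function and variable names are mine, not from any standard tool, but under the assumptions above it produces numbers close to these:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(means, top_n, n_weeks=100_000):
    """Mean weekly total of a team's top-N WRs; each WR's weekly score
    is drawn from a normal with sd equal to half his per-week average."""
    means = np.asarray(means, dtype=float)
    # One row per simulated week, one column per WR.
    scores = rng.normal(means, means / 2, size=(n_weeks, len(means)))
    # Sort each week's scores and keep the best N.
    top = np.sort(scores, axis=1)[:, -top_n:]
    return top.sum(axis=1).mean()

team_a = [20, 18, 16, 14, 12, 10]
team_b = [12.86] * 7

for n in (3, 4):
    print(f"top {n}: A = {simulate(team_a, n):.0f}, B = {simulate(team_b, n):.0f}")
```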
If you use a lognormal distribution instead of a normal, which I think is probably a little more realistic, you get very similar results. If you mess around with the standard deviations, it doesn't change the conclusion.
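One way to do the lognormal variant is to moment-match, picking the lognormal's parameters so each WR keeps the same mean and standard deviation as before; that parameterization is my own choice, not something from the original:

```python
def simulate_lognormal(means, top_n, n_weeks=100_000):
    """Same experiment with lognormal scoring, moment-matched so each
    WR keeps his original mean and sd (sd = half the mean)."""
    means = np.asarray(means, dtype=float)
    sigma2 = np.log(1 + 0.25)        # ln(1 + (sd/mean)^2) with sd = mean/2
    mu = np.log(means) - sigma2 / 2  # keeps E[score] equal to the mean
    scores = rng.lognormal(mu, np.sqrt(sigma2), size=(n_weeks, len(means)))
    top = np.sort(scores, axis=1)[:, -top_n:]
    return top.sum(axis=1).mean()
```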
Even if you take away Team A's last WR, so that Team A is playing with 5 WRs and Team B with 7, Team A's top 3 WRs are still expected to outscore Team B's top 3. If you take away Team A's last two WRs, so that Team A is playing with 4 while Team B's 7 WRs carry a total point expectation 22 points higher (90 vs. 68), Team A's top 3 still have the higher expected score.
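Using the same simulate() sketch from above, that depth experiment is just a matter of shortening Team A's roster:

```python
print(simulate([20, 18, 16, 14, 12], 3))  # Team A down to 5 WRs
print(simulate([20, 18, 16, 14], 3))      # Team A down to 4 WRs (total 68)
print(simulate(team_b, 3))                # Team B's 7 WRs (total 90)
```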