
FBG Math Geeks...determining player values

jerseyh8r

As I have become more and more engrossed in FF over the years, I have become increasingly interested in making my own projections. This is the first season that I have been formulating projections for each team and then determining each player's value in relation to the others at the same position. For ease, I have been going through the teams alphabetically (I'm almost done with Jax), trying to create a spreadsheet that is flexible, changes easily throughout the offseason, and lets me simply use the same spreadsheet every year with different names and projections.

As I complete the spreadsheet and determine each player's FF points, the problem becomes how to weigh these scores. In an MFL dynasty league with 1/2 PPR, for instance (agree or disagree), I currently have Rudi Johnson projected at 223, Edge at 220, and JLewis at 219.

So....I am trying to determine the best way to weigh these players fairly, beyond my projections. This is what I have come up with so far....please help me perfect the system:

I have decided to use a worst-starter baseline to determine how much more or less valuable one player is than the worst starter (as a percentage of the worst starter's scoring)....I am calling this figure a player's "likeability". So far, Kevin Jones is my worst starter (10-team league, single RB start) and has a likeability score of 100, and Rudi is 108, Edge 107, Jamal 106. The variables that I wish to use to alter their likeability include (so far) variation from game to game, health, SOS, and weeks 15-16 (weather potential and SOS).

This is where I need some assistance (and I can provide further details if necessary).

I am using a coefficient of variation (CV) to measure the variation in FF points from game to game across the player's last full season (the best scores are closest to zero, with anything over 1.0 being poor). I am tiering the SOS into 4 tiers (based on the actual values, not groups of 8) with values of 0, 0.3, 0.6, and 1, with 0 being the easiest. So far, the best I can come up with for injuries is simply assigning an admittedly arbitrary risk value, whereby the highest-risk players (e.g., Ahman Green, Domanick Davis) lose value equal to 3 games' worth of FF points....again, a 0 to 1 scale with 0, 0.3, 0.6, and 1 as the values. Playoff weather is determined by whether the scheduled games take place in a northern climate (0, 0.5, and 1.0 as the values, with 0 meaning no such games, increasing to 1.0 if both the semis and the finals are outdoors in northern cities).
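For reference, here is a minimal sketch of the game-to-game CV calculation (the weekly point totals below are made up, just for illustration):

```python
# Minimal sketch: coefficient of variation (CV) of a player's weekly fantasy points.
# The weekly totals below are made-up numbers, just for illustration.
from statistics import mean, pstdev

weekly_points = [14.5, 8.0, 21.0, 11.5, 6.0, 17.5, 9.0, 13.0]

avg = mean(weekly_points)
cv = pstdev(weekly_points) / avg  # std dev relative to the mean; lower = more consistent

print(f"Avg FPG: {avg:.1f}, CV: {cv:.2f}")  # CV near 0 = steady, over 1.0 = very streaky
```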

I am currently weighing variation and injury risk the most, with the SOS and playoff variables carrying only a little weight. So, please critique what I am doing and how much I am weighing certain variables:

X = (FF pts / FF pts of worst starter) * 100

Likeability = X - (0.20 * X * CV) - (0.15 * X * Inj Value) - (0.1 * X * SOS) - (0.025 * X * playoff weather value) - (0.025 * X * playoff SOS)
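As code, the formula above would look something like this (the worst-starter total and the factor values in the example call are hypothetical placeholders, not my actual numbers):

```python
# Minimal sketch of the likeability formula exactly as written above.
# All inputs in the example call are hypothetical placeholders.

def likeability(ff_pts, worst_starter_pts, cv, inj, sos, playoff_weather, playoff_sos):
    x = (ff_pts / worst_starter_pts) * 100  # percent of the worst starter's scoring
    return (x
            - 0.20 * x * cv                # game-to-game variation (CV)
            - 0.15 * x * inj               # injury risk tier (0 to 1)
            - 0.10 * x * sos               # strength-of-schedule tier (0 to 1)
            - 0.025 * x * playoff_weather  # weeks 15-16 weather (0, 0.5, 1.0)
            - 0.025 * x * playoff_sos)     # weeks 15-16 SOS tier (0 to 1)

# Example: 223 projected points against a hypothetical 207-point worst starter,
# with made-up factor values.
print(round(likeability(223, 207, 0.45, 0.3, 0.3, 0.0, 0.3), 1))
```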

What say ye????

 
If it were me, I think I'd have a few questions about each factor you're including that I would want to answer.

First: Have I established that the factor is something that is predictable enough to be worth including? For example, if you're talking about using SoS based on last season's team records, and someone (Doug Drinen) has found that it doesn't seem to have any bearing on actual outcome, then you might not want to use it at all.

Second: If I've established it is something predictable enough to be worthy of inclusion... then can I determine how much it actually changes things, and use that? For example, if you're going to include something for playoff weather, then I think you need to start with "How much and in which direction do a RB's stats change in outdoor stadiums up north in December, vs. his stats the rest of the year?"

For something like that, I wouldn't just come up with some arbitrary 0-1 rating. If you believe that a RB will drop 1 FPG during your playoffs, then I would use that number and I would then find a way to relate how important that is to me vs. his contribution during the regular season. I.e. If I expect a given RB to score 9 FPG during the regular season, and expect him to drop to 8 FPG because of weather for my fantasy playoffs... would I rather have him over a guy who will play at 8.5 FPG all season long including the playoffs? I'd need to find where I would draw the line as the players being of equal value in my mind, and find a way to reflect that numerically.
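As a back-of-the-envelope illustration of that comparison (assuming a 13-week regular season plus a 3-week fantasy playoff, and using the FPG figures from the example above):

```python
# Back-of-the-envelope comparison from the example above.
# The 13 regular-season weeks vs. 3 fantasy-playoff weeks split is an assumed league setup.
REGULAR_WEEKS, PLAYOFF_WEEKS = 13, 3

def season_points(regular_fpg, playoff_fpg, playoff_weight=1.0):
    # playoff_weight > 1 means you value a playoff point more than a regular-season point
    return regular_fpg * REGULAR_WEEKS + playoff_fpg * PLAYOFF_WEEKS * playoff_weight

rb_a = season_points(9.0, 8.0)   # drops a point per game in the fantasy playoffs
rb_b = season_points(8.5, 8.5)   # steady all year

print(rb_a, rb_b)  # 141.0 vs. 136.0 with equal weighting; the crossover depends on playoff_weight
```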

To sum up what I've said, I personally think you would be better off deciding how the factors affect the player's value to you, and then find a way to model it... rather than start with a model (0, .5 or 1.0 times some factor) and then figure out what the numbers should be.

 
Thanks for the response, you raise some thought-provoking questions.

For example, if you're talking about using SoS based on last season's team records, and someone (Doug Drinen) has found that it doesn't seem to have any bearing on actual outcome, then you might not want to use it at all.
Link, please?
 
Link, please?
I would PM Doug.
 
Link, please?
I would PM Doug.
I was pretty sure someone here mentioned it and pointed me to the blog at pro-football-reference.com, though searching there I'm not seeing it. As I recall, it found that if you used the end-of-season SoS (i.e., 20/20 hindsight), there was a correlation with how teams did. But using the preseason SoS based on last year's results, there wasn't a correlation with how the next year's results went.
 
But anyway, I just wanted to make the larger point that before I tried to add in a factor, I'd first want to see if it made a difference, and how much. For instance, I can't recall off-hand any studies about how weather affects players, though I'm sure someone has done them. Before I used it in my valuation of players, I would definitely want to see what the actual effect (if any) was.

 
Sorry for the triple post... found it on the board here.

A blog entry inspired by this thread:

http://www.pro-football-reference.com/blog/wordpress/?p=17

The NFL schedule was released last week. Like most people who are neither season ticket holders nor executives for FOX or CBS, I like the new flexible scheduling plan that will allow more interesting games to be shown on Sunday nights.

As has been noted elsewhere, the toughest schedules (based on last year’s records) belong to the Giants and Bengals, whose 2006 opponents were a combined 139-117 in 2005. The Bears have the easiest slate; their opponents were 114-142 last year.

But as we all know, some teams that were bad in 2005 will be good in 2006 and vice versa. And some schedules that look easy right now will actually be tough and vice versa. The question is: to what extent, if any, do the Bears have an advantage over the Giants because of their schedules? Two games? One game? Half a game?

To investigate this, I went back to 1990 and recorded three bits of data about every team.

1. their own record in Year N-1

2. their preseason estimated strength of schedule. I.e. the combined Year N-1 records of the team’s Year N opponents.

3. their record in Year N

For the 2005 New York Jets, for example, I have

1. .625 (their 2004 record was 10-6)

2. .535 (the combined 2004 record of their 2005 opponents)

3. .250 (their 2005 record ended up being 4-12)

I then labeled every team, based on their Year N-1 performance, as either Very Bad (less than 5 wins), Bad (5 or 6 wins), Mediocre (7 to 9 wins), Good (10 or 11 wins), or Very Good (12 or more wins). I also labeled each team’s projected schedule as either Easy (combined opponents record under .500) or Hard (over .500).

Take a look at the Very Bad teams, for example. The Very Bad teams with a projected Easy schedule averaged 6.44 wins the next year. The Very Bad teams with a projected Hard schedule averaged 6.63 wins. The difference is not significant, and that’s the point. Here is the complete breakdown:

Average Wins in Year N

                        Easy Sched   Hard Sched
Very Bad in Year N-1       6.44         6.63
Bad in Year N-1            7.67         7.26
Mediocre in Year N-1       7.82         8.27
Good in Year N-1           8.94         8.57
Very Good in Year N-1      8.78        10.06
TOTAL                      7.73         8.27

An eyeballing of this table indicates that the estimated schedule strength is essentially irrelevant, and official statistical tests confirm that. [For example, a regression of Year N record on Year N-1 record and projected Year N schedule strength produces a not-even-close-to-significant coefficient for schedule strength.]

Note that I'm not saying that schedule strength isn't important. Some teams will have harder schedules than others in 2006 and it will make a difference. The point is that these strength-of-schedule estimates that are being thrown around right now seem to have no role at all in determining teams' 2006 records.
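For anyone who wants to reproduce the bracketed regression, here is a minimal sketch using the three columns described above (the arrays hold hypothetical placeholder values, not the actual 1990-2005 data):

```python
# Minimal sketch of the regression described in the bracketed note:
# Year N win pct ~ Year N-1 win pct + projected Year N schedule strength.
# The arrays below are hypothetical placeholders, not the actual 1990-2005 data.
import numpy as np

prev_record = np.array([0.625, 0.250, 0.500, 0.750, 0.375])   # Year N-1 win pct
proj_sos    = np.array([0.535, 0.480, 0.510, 0.470, 0.525])   # opponents' Year N-1 win pct
next_record = np.array([0.250, 0.375, 0.563, 0.688, 0.438])   # Year N win pct

X = np.column_stack([np.ones_like(prev_record), prev_record, proj_sos])
coefs, *_ = np.linalg.lstsq(X, next_record, rcond=None)
print(coefs)  # intercept, coefficient on prior record, coefficient on projected SoS
```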
 
If I expect a given RB to score 9 FPG during the regular season, and expect him to drop to 8 FPG because of weather for my fantasy playoffs... would I rather have him over a guy who will play at 8.5 FPG all season long including the playoffs? I'd need to find where I would draw the line as the players being of equal value in my mind, and find a way to reflect that numerically.
Obviously, I don't value it very highly at all (nearly a 10th of what I value consistency), but would others value it more or less? The way the formula works, it would only help me choose in a virtual dead-heat situation.
 
Sorry for the triple post... found it on the board
No problem, it is greatly appreciated. Gotta get back to work for a while, and will respond in more detail later. But at first glance this seems to draw correlations for teams' overall records, not the performance of particular players based on the "Ultimate SOS" as featured at FBGs.com (which is the SOS I was planning on using for the aforementioned formula).
 
I'm sure the topic has been discussed before, though I don't recall anything on it recently. Might want to do some searches, and if you don't find anything recent it would be an excellent topic for a whole thread of its own. I don't just mean the weather effect on a player, but the overall question of how much of your decision to draft a player comes down to your beliefs about his regular season and how much to your beliefs about his performance come playoff time.
 
Hey there Jersey Hater:

I've been doing this a long, long time, and want to give you some simple advice.

1) Move out of Northern New Jersey. :D

2) Don't get bogged down trying to incorporate the effects of things like weeks 14-16 SoS and (especially) weather into your projections. Adding such factors gives you a sense that you are closer to predicting a reality 4 months into the season, but it is illusory. All the changes that occur to both offenses and defenses during the first 13 weeks (injuries and effectiveness), as well as the statistical unpredictability of any one particular game's numbers, make preseason adjustments like these futile.

Keep things simple. There is more than enough work to do just focusing on POTTS.

Production = Opportunity + Talent + Team (that is, surrounding talent) + System.

If you can do a good job of weeding through all the factors that go into the above acronym to come up with solid projections, you will have accomplished more than most. It is a monumental chore just to do that properly.

Chance plays a huge role in this hobby, and adding too many obscure bells and whistles actually detracts focus from what is important and, IMO, adds nothing of real measurable value.

 
1) Move out of Northern New Jersey. :D
2 years and counting....can't wait.
Keep things simple. There is more than enough work to do just focusing on POTTS. Production = Opportunity + Talent + Team (that is, surrounding talent) + System.
Keep it simple, stupid....Production = Opportunity + Talent + Team + System. Not bad, I like the acronym. The problem is I also liked my team last year before a bunch of :ptts: happened. The biggest thing that bit me was inconsistency from my top players when they weren't injured, so maybe I'm trying to overcompensate this season for bad luck last season.

So, as of now, I am willing to get rid of weeks 15-16 in my little home-made formula. Now it is beginning to look simplified:

X = (FF pts / FF pts of worst starter) * 100

Likeability = X - (0.20 * X * (CV of player - best CV at position)) - (0.15 * X * Inj Value) - (0.1 * X * SOS)

What do we think? Project for 16 weeks. Compare to the worst starter for value. Adjust that value to account for consistency (historically), injury risk, and, to a smaller extent, the overall "ultimate SOS" for the player's position, weighing each of them less (respectively). I know, not completely KISS-POTTS, but I feel as though this takes into account variables that 16-week projections don't (e.g., the health of DD, or Mike Vick blowing up for 100 yards and a couple of rushing TDs twice during the season, thus skewing his stats to be less helpful in head-to-head leagues than they may appear).

ETA: Thanks for all of your productive input, guys. Much appreciated. You are keeping me from wasting valuable time and proving to be a good "devil's advocate" to my overly compulsive mind.
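And as a minimal sketch of that trimmed-down version in code (same 0-to-1 tiers as before; all the inputs are hypothetical placeholders):

```python
# Minimal sketch of the trimmed-down formula above.
# cv_vs_best is the player's CV minus the best CV at his position;
# all inputs in the example call are hypothetical placeholders.

def likeability_v2(ff_pts, worst_starter_pts, cv_vs_best, inj, sos):
    x = (ff_pts / worst_starter_pts) * 100
    return x - 0.20 * x * cv_vs_best - 0.15 * x * inj - 0.10 * x * sos

# Hypothetical example: 223 projected points against a 207-point worst starter.
print(round(likeability_v2(223, 207, 0.15, 0.3, 0.3), 1))
```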
 
Nice Salsa Shark.
Thanks...I should have spent the time doing something more productive, I'm sure....but this seems to have spread more joy than anything else I could have done. Snoogans.

 
I'd echo what these guys are saying above. Also, if you want to really get an idea of how good your projections will be, run them as if you were doing the analysis before last season started and see how much the factors you are considering weigh on the process. Obviously, things change, but if you're going to do this, it will help you see what looks useful and what doesn't so you can make adjustments before it shoots your team in the foot.

 
At the risk of responding without reading anyone else's responses beyond the first post, and repeating what someone else has already said...

When you get numbers like that, which will happen a lot, you should look at three things:

1) consistency of hitting near average PPG

2) week 15-16 opponents

3) high-low risk assessment.

Does this guy consistently produce week in and week out, or is he hot/cold, hit/miss, prone to wearing down, etc.?

When it comes time for the playoffs, who's he going up against? @DEN, @NE? or v.OAK, v.SF?

What is the likelihood he blows your projections away? What is the likelihood the system fails and he lays dimes and nickels all year?

Answering these three questions will give you a clear winner. If it doesn't, then go with the player whose team will be in games more often. If you're still tied, just go with your gut.

Numbers can only get you so far. I use them as much as anyone here, but they have to support your opinions, not define them.

 
I don't have a link, but it's been discussed here before regarding player consistency. The main conclusion was that players who show consistency in Year X do not necessarily show that same consistency in Year X+1 and this goes both ways. In other words, you could run the week to week standard deviation of players' fantasy performance in Year X and it will be almost worthless in terms of predicting who will be consistent on a weekly basis in Year X+1.

I like what you're trying to do, but like GregR said, if none of these variables show any future predictive value you're just spinning your wheels. And it sounds like that is the case with most of the variables you're working with.

 
That's from an old article by Doug called The Bell Curve and Fantasy Football. It's down in the conclusions section:
Update (7/7/2000): I ran some numbers on this, and it turns out that my guess was on the mark. That is, consistency in year N is a very weak indicator of consistency in year N+1. To be specific, I looked at all players from 1995-1998 who had over 50 fantasy points, and I measured their game-by-game standard deviation for that year and the next year. For RBs, I got a correlation coefficient of .09. For WRs, it was .29, and for QBs, it was -.11. Players who were consistent one year showed no strong tendency to be consistent the following year.
 
When you get numbers like that, which will happen a lot, you should look at three things: 1) consistency of hitting near average PPG, 2) week 15-16 opponents, 3) high-low risk assessment.
Strongly agree...
 
That's from an old article by Doug called The Bell Curve and Fantasy Football. It's down in the conclusions section:
Update (7/7/2000): I ran some numbers on this, and it turns out that my guess was on the mark. That is, consistency in year N is a very weak indicator of consistency in year N+1. To be specific, I looked at all players from 1995-1998 who had over 50 fantasy points, and I measured their game-by-game standard deviation for that year and the next year. For RBs, I got a correlation coefficient of .09. For WRs, it was .29, and for QBs, it was -.11. Players who were consistent one year showed no strong tendency to be consistent the following year.
:goodposting: Thanks. ETA: Gonna look at the top 10 RBs from last season compared to 2003 and 2005 with a correlation coefficient (so long as I can figure it out) and post back later....thanks for the input, guys.

 
:hot: :hot: :hot: :hot:

:wall: :wall: :wall: :wall:

(FRUSTRATED IS ALL)

Thanks for all of the quality input here guys....I really do appreciate it.

I have been trying to take a somewhat "common sense" approach to my formula, but there seems to be little basis for the notion that there is any "sense" to it at all. I have spent the last bit of time determining/confirming what Doug had observed in 2000. It wasn't so much that I didn't believe it as that I was convinced things might have changed somewhat, or at least that the numbers would favor some reliability if you looked only at the top players at a position.

Doug, if I read correctly, took the game logs of all the guys and determined their consistency using the standard deviation from the mean (I hope I am using these terms correctly, I work in the medical field for cryin' out loud). Then he looked to see if there was a correlation between years N and N+1 over 3 seasons. He looked at all players who scored more than 50 fantasy points. Even his strongest correlation (WRs, at .29) was weak, and for RBs it was close to zero.

I was looking at my variations (esp. for RBs) and they looked significant to me, so I thought that maybe if I looked at the starting RBs who had played somewhat consistently for the last 3 years (with a reputation as being somewhat consistent), maybe the stats would lean in my favor. I was so wrong!! The numbers looked significant, but statistically, there is little correlation whatsoever.

To keep it simple....I looked at the last 3 seasons of Tiki, Edge, Rudi, LT, SA, JLewis, WDunn, FTaylor, Portis, and BWest. Using a correlation coefficient, where a value of 1.0 or -1.0 is perfectly correlated (i.e., (1,2) (2,4) (3,6) (4,8)....a perfectly straight line on an X-Y chart), the overall correlation was a measly 0.14. I was legitimately surprised. Then (for poops and smirks), I grabbed the guys I thought seemed the most consistent over the last 3 seasons (Portis, Rudi, SA, LT, Edge, Tiki) to make a small sample, and their correlation was an even worse 0.07. Keep in mind that a value of 0 means no statistical correlation whatsoever.
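For anyone who wants to repeat the check, here is a minimal sketch of the year-over-year comparison (the CV lists are made-up placeholders, one entry per back, with year N paired against year N+1):

```python
# Minimal sketch of the year-over-year consistency check described above.
# Each list holds one value per running back: his game-to-game CV in year N and in year N+1.
# The numbers are made-up placeholders, not the actual figures from the post.
from statistics import correlation  # Python 3.10+

cv_year_n  = [0.45, 0.62, 0.38, 0.55, 0.70, 0.41, 0.58, 0.49, 0.66, 0.52]
cv_year_n1 = [0.60, 0.44, 0.57, 0.48, 0.39, 0.65, 0.51, 0.72, 0.46, 0.59]

r = correlation(cv_year_n, cv_year_n1)  # Pearson's r; near 0 = consistency doesn't carry over
print(round(r, 2))
```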

Based on Doug's previous analysis, where he found the correlations for WRs and QBs to be weak as well (and a sheer lack of time on my part), I am abandoning the notion of incorporating consistency into my so-called formula. This has, however, piqued my curiosity to determine if there is any correlation between games missed from injury one year to another....seeing as injury is now the only thing that remains from my once glorious (sarcasm) equation.

Looks like this is going to become a system of projections and mere gut feelings to help decide between "equally projected players". Unfortunately, that was how my team :X 'd last year.

 
It might be worthwhile to consider the difference between within-season consistency vs. across-season consistency.

I see little value in the former, some in the latter.

I tend to give an upward adjustment based on how many seasons a player has been top 10 in their position.

The problem with preferring players who have a small standard deviation within a season is that you could never beat a team better than yours. But with higher within-season-SD players, you might.

 
The problem with preferring players who have a small standard deviation within a season is that you could never beat a team better than yours. But with higher within-season-SD players, you might.
I agree...what prompted my thoughts on this was what I perceived as the Vick factor. I projected him fairly high, but I also believe he gets a large portion of his running stats in a few games, thus making him less attractive than other QBs I had projected with similar stats. I was trying to find a measurable means of analyzing that risk beyond my gut instinct, but am having little success to date (as the thread would indicate). I wouldn't consider strictly using SD as a means of selecting from a large group of players, but would use it as a factor in selecting player A vs. B if they were projected the same....I see your point about over-achieving; I suppose I just hope to do well enough with my selections that they consistently perform better than the players on other FF teams. Only time will tell.
 
This has, however, piqued my curiosity to determine if there is any correlation between games missed from injury one year to another....seeing as injury is now the only thing that remains from my once glorious (sarcasm) equation.
The number of times Doug's name comes up when it comes to getting hard evidence on what is an FF myth and what isn't gives an indication of how great this guy is. Not to mention that he has operated pro-football-reference.com for so long on just donations and (only recently) advertising, operating at a loss I'm sure. (That's a plug for people to go make a donation, if it wasn't clear enough.)

Anyway, Doug has looked at this too in Everybody is an Injury Risk. The article is worth a full read, but the 3 parts I considered most significant were:

* From the article you can extract what the average # of games a starter at a position might be expected to miss ... and from that and your league setup, find what # of games you might expect your backup to play just due to injury and bye week (not counting if he outperforms the starter); there's a quick sketch of that arithmetic after this list.

* RBs and QBs who played all 16 games last year (or even all 32 games over 2 years) are less than 50% likely to play all 16 games the next year. WRs are a lot more likely at about 2/3 playing the full next season.

* Players who played all the games last year (or over the last 2 years) on average will play more games the next year than guys who were injured. It varies by position, but for the RBs, who we worry about the most with injuries, it is less than a game's difference on average.
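For the first bullet, the backup-games arithmetic is simple; here is a minimal sketch, assuming a 16-game season, one bye week, and a made-up expected-missed-games figure for the starter:

```python
# Back-of-the-envelope estimate of how many games a backup plays due only to
# the bye week plus the starter's expected missed games.
# EXPECTED_GAMES_MISSED is a made-up placeholder; pull the real per-position figure
# from the article referenced above.
SEASON_GAMES = 16
BYE_WEEKS = 1
EXPECTED_GAMES_MISSED = 2.5  # hypothetical average for a starting RB

backup_games = BYE_WEEKS + EXPECTED_GAMES_MISSED
print(f"Expect the backup to start about {backup_games:.1f} of {SEASON_GAMES} games.")
```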

 
:hot: :hot: :hot: :hot:

:wall: :wall: :wall: :wall:

(FRUSTRATED IS ALL)
Maybe so, but through that you are growing. You may end up confirming your original ideas or find you want to throw them out altogether, but having gone through similar processes with all sorts of ideas over the years (I've been in FF 26 years now), I can tell you that working through the process will be a good learning experience. Hang in there.

Remember in an earlier post I mentioned that I now concentrate on POTTS and don't tweak based on a 3-game playoff stretch? I've learned that even the very best forecasters don't come close enough to actuals to warrant that sort of fine tuning. That's what I believe, but I think it's good that you are applying your analytical abilities to deepen your understanding of what does and does not matter, testing things for yourself.

Good luck.

 
Anyway, Doug has looked at this too in Everybody is an Injury Risk. The article is worth a full read, but the 3 parts I considered most significant were:
These links have been a wealth of information and (again) are greatly appreciated. Are these pages you continue to reference ones you have bookmarked for yourself over the years, or is there an index somewhere with an assortment of articles (esp. by Drinen)? Or are they pages you have read and are googling for upon remembering their content?

Just curious...TIA.

 
Good luck.
Thanks....I spent the last 2 years spending more time in the AC, practicing out scenarios I might encounter, rather than looking at the big picture. I spent time looking at the projections of others (averaging 4-5 sources) to determine value, but hadn't really grown as much as I would like as an "FF player". I feel as though spending time in the SPool has been a much better means of honing my own strengths and weaknesses. This thread is a perfect example of that.
 
Oops...Google ended up being my friend after all (LINK).
 
Drinen, as you are learning, has been at the forefront of much FF statistical research. His articles are a must-read, as is his blog on the linked site. He is a math teacher, FBG staff member, regular poster on the board, and all-around good guy.
 
By the way, if you go to the blog you'll see that the recent articles are by Chase Stuart (another FBG staff member), but this is generally Doug's blog. Doug is on vacation right now and Chase is filling in for a while.

 
Are these pages you continue to reference ones you have bookmarked for yourself over the years, or is there an index somewhere with an assortment of articles (esp. by Drinen)?
They are articles I remember and then google to go find. They should reside at PFR in the articles section (and also check his blog for new stuff), but Google finds them the quickest. :) There are a lot of people who seem against the idea of projections and spending time doing studies of historical stats and such... but to me the kind of stuff that Doug does is where you get a lot of bang for the buck: finding out which factors can actually help you over the long haul if you include them appropriately, and which are red herrings that people waste time on.

 
Maybe so, but through that you are growing. You may end up confirming your original ideas or find you want to throw them out altogether, but having gone through similar processes with all sorts of ideas over the years (I've been in FF 26 years now), I can tell you that working through the process will be a good learning experience. Hang in there.
Very true. I seem to try something new every year (and every rookie draft as well). I tend to fall back to the same old approaches in the end, but it's a fun exercise to try formulas and dynamic VBD and such - I learn things, plus it's just fun to play around with numbers like that.

The only thing that I've really incorporated heavily is a spreadsheet that tracks drafts and does some basic analysis. Spec1alk here has improved on my original ideas, and now I use his sheet - it's pretty slick, and it can really help to be able to pull up all the info on 8 No Mercy redrafts that are all in progress, or 48 Zealots rookie drafts. You can take some chances that you wouldn't have if you didn't have decent ADP to go from (example from yesterday: I rate Driver one spot ahead of DJackson this season, but Driver's ADP from the other No Mercy drafts is lower, so I took Jackson, then got Driver five picks later).

 
