it's like a nerdfest orgy up in here
I've got David Nelson pencilled in for a 0 all season. There isn't a single player on any of my rosters I think will score "around" a static number all season long.
Well, OK, let's say you project someone to hit 40 homers in a season, approximately 1 every 4 games. He hits 10 in the first 10 games. You should either expect him to continue to hit about 1 in 4 (which would give him about 48 for the season), or to hit more than 1 in 4 based on how he's actually performing this year. To expect him to hit just 30 homers for the rest of the season (1 in 5) because you'd originally projected him for 40 and he already hit 10 would be insane. Why are you bringing average into the discussion? A better example that is more analogous to fantasy football would be the number of hits or homeruns for a season.
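For the number-curious, here's a quick Python sketch of the arithmetic in that post. It assumes a 160-game season and uses only the example's numbers (40 projected homers, 10 in the first 10 games); nothing here is official data.

```python
# Sketch of the home-run pacing argument above (assumes a 160-game season).
games_in_season = 160
preseason_hr = 40                                 # preseason projection
hr_per_game = preseason_hr / games_in_season      # 0.25, i.e. 1 every 4 games

games_played, hr_so_far = 10, 10                  # the hot start from the example

# If you still believe the preseason per-game rate, the rest-of-season
# expectation is unchanged; only the season total moves up.
remaining_games = games_in_season - games_played
rest_of_season_hr = hr_per_game * remaining_games        # 37.5
updated_season_total = hr_so_far + rest_of_season_hr     # 47.5, i.e. "about 48"

# The "he's due to cool off" view instead forces the original 40:
forced_rest_rate = (preseason_hr - hr_so_far) / remaining_games   # 0.2, i.e. 1 in 5
print(updated_season_total, forced_rest_rate)
```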
You should read some of the baseball discussions on this stuff. Those guys are hardcore numbers nerds. I mean HARDCORE. it's like a nerdfest orgy up in here
You and the others keep avoiding this extremely simple question which will immediately show that your reasoning here is flawed: If you have a player projected for 320 points for the season, and then the length of the season is lengthened or shortened, would you leave his projected total the same, or would you adjust it based on the change to the number of games he's expected to play? I think we have a disagreement here. Say Rice is projected for 320 for the season. IMO, that's really a range centered on 320. Inherent in that projection is the recognition that he WON'T score 20 points every week. He might not score 20 points in a single week and still hit the projected total spot-on. If he scores 30 in week 1, do you abandon your projection immediately? Do you return to your projection if he scores 15 the next 2 weeks since he's "back on pace" or do you downgrade your projection since he's "underperformed" 2/3rds of the time? There isn't a single player on any of my rosters I think will score "around" a static number all season long. I suspect that's the same for you as well. Every week is different and somewhat unique. The player isn't unique, but his opponent and match-ups are. I don't think anyone believes there is zero volatility. That's not Modog's point. But I'm certain you have the bolded backwards. The reason we project a player to score 320 points in a season is precisely because (a) we project them to score 20 points per game (on average, not exactly 20 points every week) and (b) we project them to play 16 games. If the season was 2 games long, or 50 games long, we wouldn't still say, "This player is projected for 320 points this season." We would adjust the season-total projection based on the number of games he's expected to play, while keeping his per-game projection about the same. We don't project a player to score 20 points a week. We project a player to score 320 points for the season. If you think there is 0 volatility, then you really need to rethink playing this game.
So when we say a player is projected for 320 points for the season, what we really mean is we expect him to score around 20 points a week. If he scores 30 points in week 1, we still expect him to score 20 points per week going forward (or we may adjust this upwards, but there's no valid reason we'd adjust it downwards). If he has a death in the family and misses week 1, we don't assume he's still going to score 320 points over the remaining 15 games.
What do you think "mean" means? Seriously. Seriously? You're deflecting. Just quit, you're making a fool of yourself. Why are you bringing average into the discussion? A better example that is more analogous to fantasy football would be the number of hits or homeruns for a season. And, if I had a half season of new information I would probably be adjusting my end of year totals to reflect the season to date performance. That is not what we are talking about in the above example. We are talking about a good first week of a 16 week season... I'm short on time - so why don't you say what you think it means, and I'll just correct you. That will be much quicker for me. You can do so via PM to avoid embarrassment if you wish. In a short, underexplained, super quick explanation: If his mean is 20 ppg, then reversion to it would mean (haha wordplay) you expect him to score 20 ppg the rest of the way (reverting to his mean ppg performance) rather than 25 ppg. You're not looking at reversion to the mean; what you're positing is "underperformance in order to achieve the mean I originally projected." Really? Enlighten me... I don't understand how you can possibly think this. One above-average week simply should not negatively affect the expected averages. He doesn't have a set amount of points. The only reasonable change in his average points that you could expect would be that he scores MORE per week, based on you underestimating his talents and being shown that in the above-expected-average week 1 performance. Also, you don't know what reversion to the mean is. Learn, then try again. Trying to get my brain around this... and I think I disagree with this line of thinking. I'm leaning more towards reversion to the mean. If Player X is expected to score 20 points/week for the season and comes out in week 1 over that number, that would lead me to believe that future weeks should average below 20/week... unless you think that 25/week is the new normal, which would change the answer. No, that's not how it works. Ray Rice is not more likely to have a bad week in the future because he had a good week this week. Having mostly all the stud players score appropriately to their costs week 1 is bound to hurt in the upcoming weeks since week 1 had no cuts. Since those players will roughly score X amount of the year, one of their sub par weeks was not gotten out of the way in the no cut week 1...
Should be interesting to see how that plays out. I'd be betting there will be some huge small roster drop offs in one of the next couple weeks due to that as a number of studs will probably dud all in the same week... The fact that anyone thinks this boggles my mind. There is no set number of good/bad weeks or absolute end-of-season total that each player has to hit. If you expected Rice to score 320 points on the season and 20 in week 1, and he actually scored 25 in week 1, you should now expect him to score 325 points on the season, not expect him to score 295 points for the rest of the season.
You are wrong. If I expect Player X to average 20 points/game for the season, I expect him to score 20 x 16 or 320 points this season. If he scores 25 points in Week 1, I now expect him to score 295 points in the remaining 15 games or 19.66 points/game. That is reversion to the mean. I never suggested that his season total would be higher based off of a better Week 1. As I mentioned in my original post, if I now expect Player X to score 25 points /week then I've changed the equation and the ending season total. Also, "underperformance in order to achieve the mean I originally projected" is exactly what reversion to the mean is.
If a .300 hitter for 10 years starts his 11th season hitting .350 through 81 games, do you expect him to bat only .250 the second half, or do you expect him to bat closer to .300?
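A little Monte Carlo sketch of that question, assuming the hitter's true talent really is .300 and using made-up sample sizes (300 ABs per half); the point is that conditioning on a hot first half doesn't change the second-half expectation:

```python
import random
random.seed(0)

# Assumes a true-talent .300 hitter with 300 ABs in each half (illustrative numbers).
TRUE_AVG, HALF_ABS, TRIALS = 0.300, 300, 20_000

hot_second_halves = []
for _ in range(TRIALS):
    first = sum(random.random() < TRUE_AVG for _ in range(HALF_ABS))
    if first / HALF_ABS >= 0.350:               # keep only the ".350 first half" seasons
        second = sum(random.random() < TRUE_AVG for _ in range(HALF_ABS))
        hot_second_halves.append(second / HALF_ABS)

# Conditioned on a .350 first half, the second half still averages about .300,
# not the .250 that would be needed to "even out" to a .300 season line.
print(sum(hot_second_halves) / len(hot_second_halves))
```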
That would be me. And no need to go there. Just quit, you're making a fool of yourself.
I would leave it and assume a day off. You and the others keep avoiding this extremely simple question which will immediately show that your reasoning here is flawed: If you have a player projected for 320 points for the season, and then the length of the season is lengthened or shortened, would you leave his projected total the same, or would you adjust it based on the change to the number of games he's expected to play?
The guy who brought up coin flips used a bad example. Apparently it is as simple as coin flips. If you think there is 0 volatility, then you really need to rethink playing this game.
The Gambler's fallacy, also known as the Monte Carlo fallacy (because its most famous example happened in a Monte Carlo Casino in 1913),[1][2] and also referred to as the fallacy of the maturity of chances, is the belief that if deviations from expected behaviour are observed in repeated independent trials of some random process, future deviations in the opposite direction are then more likely.
Trust me, the stats-nerds would tell you that it isn't insane at all. Don't get me wrong, baseball and football are very, very different. The ebb and flow of the baseball season mandates that players will have hot stretches and cold stretches. You can't get too wrapped-up in a hot-streak or a cold-streak for any player. At some point, however, the hot-streak or cold-streak ceases being a streak and indicates that the player has some sort of a new "norm" that wasn't initially projected. Identifying when that occurs is the source of the baseball stat-nerd debates. Well, OK, let's say you project someone to hit 40 homers in a season, approximately 1 every 4 games. He hits 10 in the first 10 games. You should either expect him to continue to hit about 1 in 4 (which would give him about 48 for the season), or to hit more than 1 in 4 based on how he's actually performing this year. To expect him to hit just 30 homers for the rest of the season (1 in 5) because you'd originally projected him for 40 and he already hit 10 would be insane. Why are you bringing average into the discussion? A better example that is more analogous to fantasy football would be the number of hits or homeruns for a season.
In one way or another we all do. Maybe not explicitly, but you can't project a season that includes week 13 without projecting week 13. And FTR, I did in fact project week 13. Well, who attempts to project a player's performance in week 13 during the pre-season? I submit no one. This is all for simplicity purposes. No one should project an exact 20 points per game. We project 10, 15, 5, 12, 14, 8, 6 and so on... Having someone beat their first week projection shouldn't downgrade their future projections JUST BECAUSE they beat their first week projection. Some situations warrant a downgrade or upgrade, but it's due to the situation, not simply because of an artificial requirement that if you projected someone for 1000 yards, you have to make your weekly projections fit so they add up to that 1000 yards.
It doesn't really have anything to do with how accurate or what your projections were or are. It has to do with one week's performance not changing your future expectations based on that performance meeting, exceeding or falling short of your initial expectation. Even if you don't do projections out week by week in the preseason, if you project someone to rush for 1000 yards on the season and come week 1 you expect that person to rush for 80 in that game, you're implying that they'll rush for 920 in the rest of the year. If that player rushes for 100 in week 1, you should still expect/project that player to rush for 920 yards for the rest of the season. Obviously it could change if one of your assumptions changed (he's better than you thought, it's more/less of a time share than you thought, he twisted his ankle and so on) but knowing nothing other than he outgained his week 1 projection by 20 yards shouldn't change what you had originally projected going forward. The fact that your implied season projection moves from 1000 to 1020 is the volatility/range factor that you describe. There are 2 different (fundamentally different, IMO) discussions going on here. The first is about a player's total projected score for the season as a whole in light of his performance in week 1. The second is about a player's performance in week 1 exceeding a projection for that discrete event. They are not the same. Again, IMO they aren't even close to the same thing. Bigger picture, however, is how we view projections in the first place. Projections are, by their nature, imprecise. If we projected 20 from Ray Rice in week 1 and he scored 22, we would say that our projection was a good one. Similarly, if we projected 320 out of Rice for the season and he ended with 330, we would say that our projection was outstanding. Projections are really ranges, not a static number. They are best viewed as "X +/- Y%".
This isn't necessarily a sport-specific concept. You're right in that because there are so many more trials in baseball you'll see these streaks come out more, but it doesn't change anything. If you believe someone is a .300 hitter, even if they start out 0 for 100 (and knowing nothing else about the situation), you still expect them to get 30 hits over their next 100 at bats. If at the beginning of the year you expect them to get 200 hits over the season and halfway through they have 50, do you expect them to get closer to 150 over the next half? Or closer to 100? Trust me, the stats-nerds would tell you that it isn't insane at all. Don't get me wrong, baseball and football are very, very different. The ebb and flow of the baseball season mandates that players will have hot stretches and cold stretches. You can't get too wrapped-up in a hot-streak or a cold-streak for any player. At some point, however, the hot-streak or cold-streak ceases being a streak and indicates that the player has some sort of a new "norm" that wasn't initially projected. Identifying when that occurs is the source of the baseball stat-nerd debates. Well, OK, let's say you project someone to hit 40 homers in a season, approximately 1 every 4 games. He hits 10 in the first 10 games. You should either expect him to continue to hit about 1 in 4 (which would give him about 48 for the season), or to hit more than 1 in 4 based on how he's actually performing this year. To expect him to hit just 30 homers for the rest of the season (1 in 5) because you'd originally projected him for 40 and he already hit 10 would be insane. Why are you bringing average into the discussion? A better example that is more analogous to fantasy football would be the number of hits or homeruns for a season.
150. This isn't necessarily a sport-specific concept. You're right in that because there are so many more trials in baseball you'll see these streaks come out more, but it doesn't change anything. If you believe someone is a .300 hitter, even if they start out 0 for 100 (and knowing nothing else about the situation), you still expect them to get 30 hits over their next 100 at bats. If at the beginning of the year you expect them to get 200 hits over the season and halfway through they have 50, do you expect them to get closer to 150 over the next half? Or closer to 100?
I'm not ignoring it, I just don't think it has anything to do with the issue. If the season were suddenly lengthened I would adjust my season-long projections. Just like I would if the season were shortened. Neither would have any impact whatsoever on my projections for his performance in week 1. Your query has absolutely nothing to do with the discussions, which are: (1) how does a player's performance in a given week impact your overall total projected score for that player and (2) how does a player's performance in a given week compared to his projected performance for that week impact future weekly projections. If the season ended tomorrow, I would project him to have 0 points from this day forward. If the season were expanded to 32 games, I would increase my projections (but not come close to doubling them) to reflect the added point-scoring opportunities he has. None of that has anything to do with how I allow his performance in 1 game to affect my overall assessment of how he will perform for the season with 1 major exception: the number of data points I have in relation to the total set. If I've seen 50% of his games after week 1, that week 1 performance will have a huge impact on my season-long projections for him. If I've seen 1/32 of his games after week 1, that week 1 performance will have absolutely no impact on my season-long projection for him. You and the others keep avoiding this extremely simple question which will immediately show that your reasoning here is flawed: If you have a player projected for 320 points for the season, and then the length of the season is lengthened or shortened, would you leave his projected total the same, or would you adjust it based on the change to the number of games he's expected to play?
That's precisely why the coin flips were a good example - because they illustrate that just because you have a trend in one direction, doesn't mean you should expect a trend in the opposite direction to "even out" the total. Read the bolded carefully and apply it to the football situation. It's a fallacy to believe that if a player does well in week 1, he is more likely to do poorly later in the season. The guy who brought up coin flips used a bad example. Apparently it is as simple as coin flips. If you think there is 0 volatility, then you really need to rethink playing this game. The Gambler's fallacy, also known as the Monte Carlo fallacy (because its most famous example happened in a Monte Carlo Casino in 1913),[1][2] and also referred to as the fallacy of the maturity of chances, is the belief that if deviations from expected behaviour are observed in repeated independent trials of some random process, future deviations in the opposite direction are then more likely.
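If anyone wants to see the coin version in code, here's a tiny Python sketch of the fair-coin case only (a player obviously isn't a coin; this just illustrates the fallacy in its purest form):

```python
import random
random.seed(1)

# After two heads, the expected number of heads in the remaining eight flips
# of a fair coin is still 4; the coin does not owe you extra tails to "even out".
TRIALS = 100_000
remaining_heads = [sum(random.random() < 0.5 for _ in range(8)) for _ in range(TRIALS)]
avg_remaining = sum(remaining_heads) / TRIALS        # ~4.0

expected_total_after_two_heads = 2 + avg_remaining   # ~6, not the preseason "5"
print(avg_remaining, expected_total_after_two_heads)
```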
You're asking a VERY different set of questions here than the discussion at hand. Your questions about baseball are addressing larger sets of data than 1 football game. Everyday baseball players average a little over 600 PA in a season. You are throwing out 1/6th of the season stretches in your examples (and fully 1/2 of the season in some). That's the equivalent of nearly 3 games worth of football data to 8 games worth of football data. But since you asked, here is an article that serves as the dumbed-down version of some serious statistical analysis on those very questions you asked. The link to the heavy reading is in the article. I tried to read it once and decided to take Keri's word for it. http://www.grantland.com/blog/the-triangle/post/_/id/27171/fantasy-fiesta-when-can-we-trust-the-numbers Basically, OBP (baseball stat-nerds loathe BA with every ounce of their being) becomes reliable at 500 PA. HR rate stabilizes at 300 PA. 100 PA gives you stability with respect to swing % (percentage of pitches a batter swings at) and contact rate (percentage of pitches swung at that are put in play). Nothing else. But I am completely sincere when I say that baseball analysis is fundamentally different than football analysis. This isn't necessarily a sport-specific concept. You're right in that because there are so many more trials in baseball you'll see these streaks come out more, but it doesn't change anything. If you believe someone is a .300 hitter, even if they start out 0 for 100 (and knowing nothing else about the situation), you still expect them to get 30 hits over their next 100 at bats. If at the beginning of the year you expect them to get 200 hits over the season and halfway through they have 50, do you expect them to get closer to 150 over the next half? Or closer to 100?
That's exactly my point. Besides, I thought you don't project weekly performance, just season performance? I'm not ignoring it, I just don't think it has anything to do with the issue. If the season were suddenly lengthened I would adjust my season-long projections. Just like I would if the season were shortened. Neither would have any impact whatsoever on my projections for his performance in week 1.
It has everything to do with your assertion that we don't project weekly points, we project season points. Your query has absolutely nothing to do with the discussions, which are: (1) how does a player's performance in a given week impact your overall total projected score for that player and (2) how does a player's performance in a given week compared to his projected performance for that week impact future weekly projections.
Obviously, though that's missing the point. If the season ended tomorrow, I would project him to have 0 points from this day forward.
Why would you not come close to doubling his season-total projections if the length of the season was doubled? Ignore the idea that he'll be more tired with twice as many games or whatever, that misses the point. If that's your objection, then reframe the question this way: If the length of the season was cut in half, would you cut his season-total projection in half or leave it the same? If the season were expanded to 32 games, I would increase my projections (but not come close to doubling them) to reflect the added point-scoring opportunities he has.
Again, you're stuck on this idea that you project season-total stats as opposed to game-total stats. You obviously don't, even though you're unwilling to admit it. If Ray Rice is projected to run for 1600 yards in a season, that means we think that in an average game, he'll rush for about 100 yards. If you add two games to the schedule, you don't leave the season-total projection at 1600 yards, and therefore assume that he's somehow a worse RB. Similarly, if we remove two games from the schedule, you don't leave the season-total projection at 1600 yards, and therefore assume that he's somehow a better RB. None of that has anything to do with how I allow his performance in 1 game to affect my overall assessment of how he will perform for the season with 1 major exception: the number of data points I have in relation to the total set. If I've seen 50% of his games after week 1, that week 1 performance will have a huge impact on my season-long projections for him. If I've seen 1/32 of his games after week 1, that week 1 performance will have absolutely no impact on my season-long projection for him.
Coin flips and rolls of the dice are discrete events. A series of athletic contests is not. That's precisely why the coin flips were a good example - because they illustrate that just because you have a trend in one direction, doesn't mean you should expect a trend in the opposite direction to "even out" the total. Read the bolded carefully and apply it to the football situation. It's a fallacy to believe that if a player does well in week 1, he is more likely to do poorly later in the season. Obviously there are meaningful differences between a fair coin and a football player, but the basic premise remains the same. If you flip a coin ten times, you expect there to be 5 heads and 5 tails. If you get heads on the first two flips, how many heads should you expect on the remaining eight flips?
Of course it is. Every play is a discrete event. Every game is a discrete event, which is a sum of discrete plays. Every season is a discrete event, which is a sum of discrete games, each of which is a sum of discrete plays. I saw above that you posted a link to some "heavy analysis" and then remarked that you tried to read it and then decided to just take the author's word for it. Do you think it's possible that maybe you're not fully understanding what we're talking about here? Because you really are taking some strange positions, and half the time the things you're saying are actually making the case for the opposite side of the argument. Coin flips and rolls of the dice are discrete events. A series of athletic contests is not. That's precisely why the coin flips were a good example - because they illustrate that just because you have a trend in one direction, doesn't mean you should expect a trend in the opposite direction to "even out" the total. Read the bolded carefully and apply it to the football situation. It's a fallacy to believe that if a player does well in week 1, he is more likely to do poorly later in the season. Obviously there are meaningful differences between a fair coin and a football player, but the basic premise remains the same. If you flip a coin ten times, you expect there to be 5 heads and 5 tails. If you get heads on the first two flips, how many heads should you expect on the remaining eight flips?
I lack the technical savvy to insert my responses. Sorry. I don't attempt to project weekly totals pre-season. I do attempt to project weekly totals each week. In other words, this week I attempt to project Rice's performance THIS WEEK, but not in week 13. I do try to project his rest-of-season performance in the context of trade analysis, but that tends to get into really soft assessment of players (I think Player X will see an increased role late, I think Player Y has really good play-off matchups, etc.). I would never double a RB's season-long projections because the season is twice as long. To do so would be insanity. RBs not only carry the continued heightened risk of injury, but also the reality that a reasonably intelligent HC would recognize the need to give his starting RB fewer touches per game all season long in an effort to have a RB who isn't worn-down by the end of the season. I would come much closer to doubling a QB's projections. Finally, if I project Rice to have 1600 yards rushing for the season, I am NOT projecting him to rush for 100 yards per game. To do so would be to totally ignore opponents. Assume I project 1600 yards out of Rice (I don't, but assume I do). Based on what I saw last week, I'm projecting less than 100 yards this week against Philly. If he hits 100 I would be pleasantly surprised. The fact that he exceeded 100 yards by 22% against a Bengals defense that looked horrible doesn't suddenly make me re-think my season-long projections. Come week 4 or 5, if he's 22% ahead of my season-long projected pace, that will change. 1/16th isn't enough data to make me alter my projection. If Rice struggles this week against what appears to be a stout Philly Defense, will you reduce your season-long projection for him? If so, I really need to get into a league with you. That's exactly my point. Besides, I thought you don't project weekly performance, just season performance? I'm not ignoring it, I just don't think it has anything to do with the issue. If the season were suddenly lengthened I would adjust my season-long projections. Just like I would if the season were shortened. Neither would have any impact whatsoever on my projections for his performance in week 1. It has everything to do with your assertion that we don't project weekly points, we project season points. Your query has absolutely nothing to do with the discussions, which are: (1) how does a player's performance in a given week impact your overall total projected score for that player and (2) how does a player's performance in a given week compared to his projected performance for that week impact future weekly projections. Obviously, though that's missing the point. If the season ended tomorrow, I would project him to have 0 points from this day forward. Why would you not come close to doubling his season-total projections if the length of the season was doubled? Ignore the idea that he'll be more tired with twice as many games or whatever, that misses the point. If that's your objection, then reframe the question this way: If the length of the season was cut in half, would you cut his season-total projection in half or leave it the same? If the season were expanded to 32 games, I would increase my projections (but not come close to doubling them) to reflect the added point-scoring opportunities he has. Again, you're stuck on this idea that you project season-total stats as opposed to game-total stats. You obviously don't, even though you're unwilling to admit it.
If Ray Rice is projected to run for 1600 yards in a season, that means we think that in an average game, he'll rush for about 100 yards. If you add two games to the schedule, you don't leave the season-total projection at 1600 yards, and therefore assume that he's somehow a worse RB. Similarly, if we remove two games from the schedule, you don't leave the season-total projection at 1600 yards, and therefore assume that he's somehow a better RB. None of that has anything to do with how I allow his performance in 1 game to affect my overall assessment of how he will perform for the season with 1 major exception: the number of data points I have in relation to the total set. If I've seen 50% of his games after week 1, that week 1 performance will have a huge impact on my season-long projections for him. If I've seen 1/32 of his games after week 1, that week 1 performance will have absolutely no impact on my season-long projection for him.
The size of the data set is irrelevant. If I believe someone is a .300 hitter, I expect him to get a hit in 30% of his ABs, no matter if it's 10 ABs, 100, 1000, or 1,000,000. Over 600 ABs at the beginning of the year I'd expect him to get 180 hits. With 2 games to play (let's say for simplicity he'll get 10 ABs over the 2 games), I expect him to get 3 hits, regardless if he has 177 hits prior to it, or 207, or 97. WHY? Because I believe he's a .300 hitter. If for some reason I change my mind and don't believe he is what I thought he was, that's one thing. But if nothing else changes I still expect him to get 3 hits over the next 10 ABs. People have good and bad games and good and bad seasons. But if, say, a player has hit .300 over the last 3 seasons, and struggles for 3/4 of the year, I don't expect him to hit .400 over the next 1/4 season to "make up" for his poor beginning. You can alter a player's projection (in any sport) because you have some reason to believe you were incorrect in your initial assumption. For example, he's a better player than you thought, he's got a nagging injury, he's in a time share, etc. But you can't alter a player's projection just to "make up" for a hot/cold streak. If he's truly a .300 hitter, over a long enough amount of ABs he'll trend toward .300. But within a season, if he starts 10 for 100, you still only expect him to get 120 hits over his next 400 ABs (30%), not 140 hits. You're asking a VERY different set of questions here than the discussion at hand. Your questions about baseball are addressing larger sets of data than 1 football game. Everyday baseball players average a little over 600 PA in a season. You are throwing out 1/6th of the season stretches in your examples (and fully 1/2 of the season in some). That's the equivalent of nearly 3 games worth of football data to 8 games worth of football data. This isn't necessarily a sport-specific concept. You're right in that because there are so many more trials in baseball you'll see these streaks come out more, but it doesn't change anything. If you believe someone is a .300 hitter, even if they start out 0 for 100 (and knowing nothing else about the situation), you still expect them to get 30 hits over their next 100 at bats. If at the beginning of the year you expect them to get 200 hits over the season and halfway through they have 50, do you expect them to get closer to 150 over the next half? Or closer to 100?
Thanks, I'll read this when I get a chance. But since you asked, here is an article that serves as the dumbed-down version of some serious statistical analysis on those very questions you asked. The link to the heavy reading is in the article. I tried to read it once and decided to take Keri's word for it. http://www.grantland.com/blog/the-triangle/post/_/id/27171/fantasy-fiesta-when-can-we-trust-the-numbers Basically, OBP (baseball stat-nerds loathe BA with every ounce of their being) becomes reliable at 500 PA. HR rate stabilizes at 300 PA. 100 PA gives you stability with respect to swing % (percentage of pitches a batter swings at) and contact rate (percentage of pitches swung at that are put in play). Nothing else. But I am completely sincere when I say that baseball analysis is fundamentally different than football analysis.
I get the sense that some of the people arguing against this concept actually agree with it, they just don't know that they are agreeing with it. Of course it is. Every play is a discrete event. Every game is a discrete event, which is a sum of discrete plays. Every season is a discrete event, which is a sum of discrete games, each of which is a sum of discrete plays. I saw above that you posted a link to some "heavy analysis" and then remarked that you tried to read it and then decided to just take the author's word for it. Do you think it's possible that maybe you're not fully understanding what we're talking about here? Because you really are taking some strange positions, and half the time the things you're saying are actually making the case for the opposite side of the argument. Coin flips and rolls of the dice are discrete events. A series of athletic contests is not. That's precisely why the coin flips were a good example - because they illustrate that just because you have a trend in one direction, doesn't mean you should expect a trend in the opposite direction to "even out" the total. Read the bolded carefully and apply it to the football situation. It's a fallacy to believe that if a player does well in week 1, he is more likely to do poorly later in the season.
Obviously there are meaningful differences between a fair coin and a football player, but the basic premise remains the same.
If you flip a coin ten times, you expect there to be 5 heads and 5 tails. If you get heads on the first two flips, how many heads should you expect on the remaining eight flips?
It's a very good example for refuting this very simple statement: The guy who brought up coin flips used a bad example. Apparently it is as simple as coin flips. If you think there is 0 volatility, then you really need to rethink playing this game. The Gambler's fallacy, also known as the Monte Carlo fallacy (because its most famous example happened in a Monte Carlo Casino in 1913),[1][2] and also referred to as the fallacy of the maturity of chances, is the belief that if deviations from expected behaviour are observed in repeated independent trials of some random process, future deviations in the opposite direction are then more likely.
All of this other crap being discussed, while making for an interesting discussion for some, is completely irrelevant when considering the OP's original comment. Since those players will roughly score X amount of the year, one of their sub par weeks was not gotten out of the way in the no cut week 1...
If you project Rice for 1500 rushing yards, and after 15 games he has 1300, do you project him for 200 in game 16? If he's at 1490 do you project him for 10 yards? Obviously not. Your expectation for what the year-end totals will be is necessarily modified by what has already occurred. That doesn't mean that your expectation for next week should be modified by what has already occurred, unless it indicates that your initial analysis was incorrect. The fact that a player does poorly in a game, or a series of games, certainly should never lead you to upgrade your expectations for his performance; similarly, the fact that he performs well shouldn't lead you to downgrade him. If Rice struggles this week against what appears to be a stout Philly Defense, will you reduce your season-long projection for him? If so, I really need to get into a league with you.
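To make that distinction concrete, here's a minimal Python sketch of a rest-of-season update under this view; the 1500-yard projection and 1300-yard pace are just the numbers from the example above, and the function name is purely for illustration.

```python
# Minimal sketch: rest-of-season expectation under an unchanged per-game rate.
# The per-game rate only moves if your opinion of the player moves,
# not to force the season total back to the preseason number.
def rest_of_season(per_game_rate, games_remaining):
    return per_game_rate * games_remaining

preseason_rate = 1500 / 16          # ~93.75 yards per game
yards_through_15 = 1300             # what he's actually done so far

# The week 16 expectation is still ~94 yards, not the 200 needed to hit 1500,
# and not the 10 that a 1490-yard start would leave over.
week16 = rest_of_season(preseason_rate, 1)
updated_season_total = yards_through_15 + week16
print(round(week16, 1), round(updated_season_total, 1))
```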
But now you're introducing a host of variables that aren't applicable to the original poster's point. I agree that Ray Rice is likely to do better against bad defenses and worse against good defenses. That partly explains the variance in weekly scoring.I think Modog's right that a lot of us are actually probably in agreement, but we're talking past each other or talking about different things. Let me refer you back to the OP that started the whole discussion:I lack the technical savvy to insert my responses. Sorry.I don't attempt to project weekly totals pre-season. I do attempt to project weekly totals each week. In other words, this week I attempt to project Rice's performance THIS WEEK, but not in week 13. I do try to project his rest-of-season performance in the context of trade analysis, but that tends to get into really soft assessment of players (I think Player X will see an increased roll late, I think Player Y has really good play-off matchups, etc.).That's exactly my point. Besides, I thought you don't project weekly performance, just season performance?I'm not ignoring it, I just don't think it has anything to do with the issue. If the season were suddenly lengthed I would adjust my season-long projections. Just like I would if the season were shortened. Neither would have any impact whatsoever on my projections for his performance in week 1.It has everything to do with your assertion that we don't project weekly points, we project season points.Your query has absolutely nothing to do with the discussions, which are: (1) how does a player's performance in a given week impact your overall total projected score for that player and (2) how does a player's performance in a given week compared to his projected performance for that week impact future weekly projections.Obviously, though that's missing the point.If the season ended tomorrow, I would project him to have 0 points from this day forward.Why would you not come close to doubling his season-total projections if the length of the season was doubled? Ignore the idea that he'll be more tired with twice as many games or whatever, that misses the point. If that's your objection, then reframe the question this way: If the length of the season was cut in half, would you cut his season-total projection in half or leave it the same?If the season were expanded to 32 games, I would increase my projections (but not come close to doubling them) to reflect the added point-scoring opportunities he has.Again, you're stuck on this idea that you project season-total stats as opposed to game-total stats. You obviously don't, even though you're unwilling to admit it. If Ray Rice is projected to run for 1600 yards in a season, that means we think that in an average game, he'll rush for about 100 yards. If you add two games to the schedule, you don't leave the season-total projection at 1600 yards, and therefore assume that he's somehow a worse RB. Similarly, if we remove two games from the schedule, you don't leave the season-total projection at 1600 yards, and therefore assume that he's somehow a better RB.None of that has anything to do with how I allow his performance in 1 game affect my overall assessment of how he will perform for the season with 1 major exception: the number of data points I have in relation to the total set. If I've seen 50% of his games after week 1, that week 1 performance will have a huge impact on my season-long projections for him. 
If I've seen 1/32 of his games after week 1, that week 1 performance will have absolutely no impact on my season-long projection for him.
I would never double a RB's season-long projections because the season is twice as long. To do so would be insanity. RBs not only carry the continued heightened risk of injury, but also the reality that a reasonably intelligent HC would recognize the need to give his starting RB fewer touches per game all season long in an effort to have a RB who isn't worn-down by the end of the season. I would come much closer to doubling a QB's projections.
Finally, if I project Rice to have 1600 yards rushing for the season, I am NOT projecting him to rush for 100 yards per game. To do so would be to totally ignore opponents. Assume I project 1600 yards out of Rice (I don't, but assume I do). Based on what I saw last week, I'm projecting less than 100 yards this week against Philly. If he hits 100 I would be pleasantly surprised. The fact that he exceeded 100 yards by 22% against a Bengals defense that looked horrible doesn't suddenly make me re-think my season-long projections. Come week 4 or 5, if he's 22% ahead of my season-long projected pace, that will change. 1/16th isn't enough data to make me alter my projection.
If Rice struggles this week against what appears to be a stout Philly Defense, will you reduce your season-long projection for him? If so, I really need to get into a league with you.
He seems to be implying a common idea here: That Player X is bound to have 2-3 bad weeks this year, and it would've been better for Player X's owners to get one of those bad weeks out of the way in week 1 when there were no cuts. That's a fallacy. Whether or not he has a good or bad week in week 1 does not increase or reduce the number of bad weeks he's expected to have in weeks 2-16. I get your point that maybe you had already penciled him in for a good game in week 1 (because of a good matchup) and a bad game in week 5 (because of a tough matchup). That's different. Having mostly all the stud players score appropriately to their costs week 1 is bound to hurt in the upcoming weeks since week 1 had no cuts. Since those players will roughly score X amount of the year, one of their sub par weeks was not gotten out of the way in the no cut week 1...
Should be interesting to see how that plays out. I'd be betting there will be some huge small roster drop offs in one of the next couple weeks due to that as a number of studs will probably dud all in the same week...
I respect your posts a great deal, but on this I think you are plain wrong. "Discrete" means unrelated to and independent from anything else. That is completely and utterly not the case in any human contest. Ray Rice is a better RB than Blount, for example. Philly's defense is better than Tennessee's. A running play against a 9-man front is not the same as a running play against a 7-man front. The results of a play can be (and oftentimes are) impacted by the previous play or any number of previous plays. By definition, they are not "discrete". The outcome of the flipping of a coin is "discrete". The outcome of the previous coin flip has absolutely no impact on the next coin flip. 6 heads does not mean that tails is "due" or that heads is "hot". The flipper doesn't impact results. The location of the flipping doesn't impact results. The gameplan of the flipper doesn't impact results. A player's performance over the course of the season can be a clear indicator of his performance for the remainder of the season. That isn't the question. The question is: When do we have enough data to alter our initial projection of a player's performance? 1/16th of the season is not enough IMO to alter my projection. Maybe it is for you, if so I disagree a great deal. I've never engaged in any statistical analysis of the issue in the context of football, but I certainly have in baseball (I've done it far longer and was far more serious about it in my younger days). I know when the trend becomes meaningful in baseball, at least statistically. I don't have that historical data for football, but I don't think it's particularly helpful. Usage, gameplan, injuries, teammates, and opponents are far more impactful and random in football than they are in baseball. Everything is far more interconnected in football than baseball. For example, we know statistically the impact of batting 7th in the lineup vs. 2nd in the lineup in baseball. No such analysis, to my knowledge, exists in football, nor would I think it would have any real value. The ability of the SS (fielding or offensively) has no impact on Josh Hamilton's performance, for example. In other words, I think it's a much "softer" analysis than baseball. Of course it is. Every play is a discrete event. Every game is a discrete event, which is a sum of discrete plays. Every season is a discrete event, which is a sum of discrete games, each of which is a sum of discrete plays. I saw above that you posted a link to some "heavy analysis" and then remarked that you tried to read it and then decided to just take the author's word for it. Do you think it's possible that maybe you're not fully understanding what we're talking about here? Because you really are taking some strange positions, and half the time the things you're saying are actually making the case for the opposite side of the argument.
Mathematics and statistics aside, I think the original point was that even studs have down weeks, and it's just kind of better to have them happen on a no-cut week. I mean, sure it's possible that some stud will be studly all 16 weeks, but it never really happens. It's kind of pointless to think about though, because if someone's studs did have a down week, they'd be wondering if they were busts. It's a very good example for refuting this very simple statement: The guy who brought up coin flips used a bad example. Apparently it is as simple as coin flips. If you think there is 0 volatility, then you really need to rethink playing this game. The Gambler's fallacy, also known as the Monte Carlo fallacy (because its most famous example happened in a Monte Carlo Casino in 1913),[1][2] and also referred to as the fallacy of the maturity of chances, is the belief that if deviations from expected behaviour are observed in repeated independent trials of some random process, future deviations in the opposite direction are then more likely. All of this other crap being discussed, while making for an interesting discussion for some, is completely irrelevant when considering the OP's original comment. Since those players will roughly score X amount of the year, one of their sub par weeks was not gotten out of the way in the no cut week 1...
That's missing the point. After week 2, I'm no longer projecting season-long totals for Rice. I'm projecting totals for weeks 3-16, since weeks 1 and 2 have already occurred. If he struggles in week 2, I may reduce my projections for weeks 3-16 because I think he's not quite as good as I originally thought, or I may leave them the same because I recognize he had a tough matchup and is still a great RB. But I'm certainly not going to increase his rest-of-season projections due to a bad week, which is effectively what some on the other side are implying you have to do. If Rice struggles this week against what appears to be a stout Philly Defense, will you reduce your season-long projection for him? If so, I really need to get into a league with you.
Thanks for the compliment, but I respectfully disagree. That is not what "discrete" means. I can refer you to any number of sources on the web for an appropriate definition. I respect your posts a great deal, but on this I think you are plain wrong. "Discrete" means unrelated to and independent from anything else. Of course it is. Every play is a discrete event. Every game is a discrete event, which is a sum of discrete plays. Every season is a discrete event, which is a sum of discrete games, each of which is a sum of discrete plays. I saw above that you posted a link to some "heavy analysis" and then remarked that you tried to read it and then decided to just take the author's word for it. Do you think it's possible that maybe you're not fully understanding what we're talking about here? Because you really are taking some strange positions, and half the time the things you're saying are actually making the case for the opposite side of the argument.
You mean, by definition, they're not independent. And I agree with that. I also agree with the gist of the rest of your post that at some point you may want to revise your original projections based on what you're seeing, and that week 1 is probably too early to do so. But whenever you decide to make that revision, you would never revise them in the opposite direction, which is what is implied by the fallacy that if a player doesn't get one of his bad weeks out of the way early, he's going to have them later in the season. If he's doing great, then you either leave his projection the same or you increase it because he's demonstrating that he's a great player. You don't say, "I had him projected for three bad games during the season, and now the season is half over and he hasn't had a single bad game yet, so I still expect him to have three bad games in the final eight weeks." That is completely and utterly not the case in any human contest. Ray Rice is a better RB than Blount, for example. Philly's defense is better than Tennessee's. A running play against a 9-man front is not the same as a running play against a 7-man front. The results of a play can be (and oftentimes are) impacted by the previous play or any number of previous plays. By definition, they are not "discrete".
This is about as simple as I think anyone can put it. If there is still disagreement about it, I don't think those people will ever understand. That's missing the point. After week 2, I'm no longer projecting season-long totals for Rice. I'm projecting totals for weeks 3-16, since weeks 1 and 2 have already occurred. If he struggles in week 2, I may reduce my projections for weeks 3-16 because I think he's not quite as good as I originally thought, or I may leave them the same because I recognize he had a tough matchup and is still a great RB. But I'm certainly not going to increase his rest-of-season projections due to a bad week, which is effectively what some on the other side are implying you have to do. If Rice struggles this week against what appears to be a stout Philly Defense, will you reduce your season-long projection for him? If so, I really need to get into a league with you.
Do you also believe it is completely irrational to expect a good week along the way? But I'm certainly not going to increase his rest-of-season projections due to a bad week, which is effectively what some on the other side are implying you have to do.
Right - if a stud has a bad week in week 1, no one's thinking "Whew! Glad he got that out of the way, now he'll be great the rest of the year!" You either think he'll do the same things in weeks 2-16 that you always thought he would, or you worry that he's a bust. In either case, a bad game in week 1 is no reason to be more optimistic about the rest of the season. It's the same idea in reverse - just because he has a good week in week 1, no one should be thinking, "Damn, now he's still due to have a bad week, I wish he'd gotten it out of the way this week when there were no cuts." You either think he'll do the same things in weeks 2-16 that you always thought he would, or you think he might be even better than you originally thought he would. In either case, a good game in week 1 is no reason to be more pessimistic about the rest of the season. It's kind of pointless to think about though, because if someone's studs did have a down week, they'd be wondering if they were busts.
No, and it's not even clear what that has to do with my post you quoted. I think you might be arguing against a point that no one's even trying to defend. Do you also believe it is completely irrational to expect a good week along the way? But I'm certainly not going to increase his rest-of-season projections due to a bad week, which is effectively what some on the other side are implying you have to do.
I try to stick with things I do best, which is being completely oblivious to my surroundings. No, and it's not even clear what that has to do with my post you quoted. I think you might be arguing against a point that no one's even trying to defend. Do you also believe it is completely irrational to expect a good week along the way? But I'm certainly not going to increase his rest-of-season projections due to a bad week, which is effectively what some on the other side are implying you have to do.
The more I reread the original post, the more I think he meant what I wrote above. There's a big difference between thinking that having a good week guarantees a bad week later on, and simply realizing that even studs have bad weeks and being happy that one of them happened on a no-cut week. Mathematics and statistics aside, I think the original point was that even studs have down weeks, and it's just kind of better to have them happen on a no-cut week. I mean, sure it's possible that some stud will be studly all 16 weeks, but it never really happens. It's kind of pointless to think about though, because if someone's studs did have a down week, they'd be wondering if they were busts.
Matt Ryan -think he can finish top 3 this year
Jake Locker -insurance and think he can be decent at times
Arian Foster -wanted two stud ppr backs
Darren McFadden -see foster, hopefully the year he finishes top 5
Peyton Hillis -thought he was good value after grabbing 2 studs
David Wilson -thought he could get some carries /catches, hopefully will rebound from week 1
Cedric Benson -thought he was good value for the $$$
Taiwan Jones -mcfadden insurance
Julio Jones -wanted 3 good top WR, could finish WR1
Antonio Brown- good receiver looking for 80-1200-8
Torrey Smith -can have some huge games, think he can go 70-1100-8
Brandon LaFell - think he has a nice year (60-900-6, makes a nice flex play)
Steve Smith -cheap flyer, not encouraged by week 1
Josh Gordon -see smith
Devin Hester -see smith
Vernon Davis -wanted a top TE but didn't want to pay for the elite guys
Jared Cook -think he is underrated and could finish top 10
Lance Kendricks -flyer, think he is good for 50-500
Matt Prater
Shaun Suisham
Justin Medlock
Atlanta Falcons
New York Jets
New Orleans Saints
Interesting that this seems to be a yearly discussion, though certainly an intriguing one. From last year, looking at QB performance in week 1 vs. rest of season as a result of Henne's monster performance: No, and it's not even clear what that has to do with my post you quoted. I think you might be arguing against a point that no one's even trying to defend. Do you also believe it is completely irrational to expect a good week along the way? But I'm certainly not going to increase his rest-of-season projections due to a bad week, which is effectively what some on the other side are implying you have to do.
Probably true. Although I do believe that some people certainly believe the latter. The more I reread the original post, the more I think he meant what I wrote above. There's a big difference between thinking that having a good week guarantees a bad week later on, and simply realizing that even studs have bad weeks and being happy that one of them happened on a no-cut week. Mathematics and statistics aside, I think the original point was that even studs have down weeks, and it's just kind of better to have them happen on a no-cut week. I mean, sure it's possible that some stud will be studly all 16 weeks, but it never really happens. It's kind of pointless to think about though, because if someone's studs did have a down week, they'd be wondering if they were busts.
I have no disagreement with this. A 25 point performance does not mean there will be a 15 point performance to "average it out". Assuming our 320 point projection was 100% spot-on accurate, he could have 5 consecutive 19 point outings or he could sit out Week 17 because Baltimore has clinched the #1 seed. I do not, however, mean to give the impression that I think that will necessarily happen. I don't think I indicated that I agreed with that initial post. There isn't some sort of "since X, then Y" correlation. It's just 1 week's performance, nothing more, nothing less. I think Modog's right that a lot of us are actually probably in agreement, but we're talking past each other or talking about different things. Let me refer you back to the OP that started the whole discussion:
He seems to be implying a common idea here: That Player X is bound to have 2-3 bad weeks this year, and it would've been better for Player X's owners to get one of those bad weeks out of the way in week 1 when there were no cuts. That's a fallacy. Whether or not he has a good or bad week in week 1 does not increase or reduce the number of bad weeks he's expected to have in weeks 2-16. I get your point that maybe you had already penciled him in for a good game in week 1 (because of a good matchup) and a bad game in week 5 (because of a tough matchup). That's different. Having mostly all the stud players score appropriately to their costs week 1 is bound to hurt in the upcoming weeks since week 1 had no cuts. Since those players will roughly score X amount of the year, one of their sub par weeks was not gotten out of the way in the no cut week 1...
Should be interesting to see how that plays out. I'd be betting there will be some huge small roster drop offs in one of the next couple weeks due to that as a number of studs will probably dud all in the same week...
Also note that I agree with his final point - there probably will be some huge small roster drop offs in the next couple of weeks due to high-priced players underperforming. But that's not because they all did well in week 1, which is what he's implying.
Bottom line: Based on this data, not only is a good week 1 performance not predictive of future poor performance, it is slightly predictive of even more good performances (relative to pre-season projections). And Iggy was right.
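For anyone who wants to rerun that kind of check themselves, here's a rough Python sketch of the comparison being described. It assumes you have per-player preseason projections plus week 1 and rest-of-season actuals in a CSV; the file name and column names below are made up for illustration, not from the post above.

```python
import csv

# Rough sketch of the week-1 "predictiveness" check described above.
# Assumes a hypothetical CSV with columns: player, proj_ppg, week1_pts, rest_ppg.
over, under = [], []
with open("qb_week1_vs_rest.csv") as f:            # hypothetical file
    for row in csv.DictReader(f):
        proj = float(row["proj_ppg"])
        week1_diff = float(row["week1_pts"]) - proj
        rest_diff = float(row["rest_ppg"]) - proj
        (over if week1_diff > 0 else under).append(rest_diff)

# If week-1 overperformers then underperformed to "even out", the first average
# would come out clearly below the second. The claim above is that it doesn't.
print("beat week 1 projection:", sum(over) / len(over))
print("missed week 1 projection:", sum(under) / len(under))
```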
According to that Marion Webster chick: 1: constituting a separate entity : individually distinct <several discrete sections> 2a : consisting of distinct or unconnected elements : noncontinuous b : taking on or having a finite or countably infinite number of values <discrete probabilities> <a discrete random variable>. Related to DISCRETE. Synonyms: detached, disconnected, separate, free, freestanding, single, unattached, unconnected. Antonyms: attached, connected, joined, linked. Thanks for the compliment, but I respectfully disagree. That is not what "discrete" means. I can refer you to any number of sources on the web for an appropriate definition.
And I left out another possibility (the most likely IMO): it could just be that Cincy's defense isn't very good and everyone who drank that Kool-Aid during the offseason is looking foolish. I have no disagreement with this. A 25 point performance does not mean there will be a 15 point performance to "average it out". Assuming our 320 point projection was 100% spot-on accurate, he could have 5 consecutive 19 point outings or he could sit out Week 17 because Baltimore has clinched the #1 seed. I do not, however, mean to give the impression that I think that will necessarily happen. I don't think I indicated that I agreed with that initial post. There isn't some sort of "since X, then Y" correlation. It's just 1 week's performance, nothing more, nothing less. I think Modog's right that a lot of us are actually probably in agreement, but we're talking past each other or talking about different things. Let me refer you back to the OP that started the whole discussion:
He seems to be implying a common idea here: That Player X is bound to have 2-3 bad weeks this year, and it would've been better for Player X's owners to get one of those bad weeks out of the way in week 1 when there were no cuts. That's a fallacy. Whether or not he has a good or bad week in week 1 does not increase or reduce the number of bad weeks he's expected to have in weeks 2-16. I get your point that maybe you had already penciled him in for a good game in week 1 (because of a good matchup) and a bad game in week 5 (because of a tough matchup). That's different. Having mostly all the stud players score appropriately to their costs week 1 is bound to hurt in the upcoming weeks since week 1 had no cuts. Since those players will roughly score X amount of the year, one of their sub par weeks was not gotten out of the way in the no cut week 1...
Should be interesting to see how that plays out. I'd be betting there will be some huge small roster drop offs in one of the next couple weeks due to that as a number of studs will probably dud all in the same week...
Also note that I agree with his final point - there probably will be some huge small roster drop offs in the next couple of weeks due to high-priced players underperforming. But that's not because they all did well in week 1, which is what he's implying.
Discrete has a very specific meaning in mathematics. What you were alluding to earlier had nothing to do with discrete-ness, you were describing independence. They're two separate concepts. According to that Marion Webster chick: 1: constituting a separate entity : individually distinct <several discrete sections> 2a : consisting of distinct or unconnected elements : noncontinuous b : taking on or having a finite or countably infinite number of values <discrete probabilities> <a discrete random variable>. Related to DISCRETE. Synonyms: detached, disconnected, separate, free, freestanding, single, unattached, unconnected. Antonyms: attached, connected, joined, linked. Thanks for the compliment, but I respectfully disagree. That is not what "discrete" means. I can refer you to any number of sources on the web for an appropriate definition.
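Since the thread ended up hinging on vocabulary, here's a toy Python sketch of why "discrete" and "independent" are separate properties. Both sequences below are discrete (countable outcomes each step); only the first is independent. The streaky-process numbers are made up for illustration, not anything from the posts above.

```python
import random
random.seed(2)

# Discrete AND independent: fair coin flips. Past flips tell you nothing about the next one.
coin = [random.choice("HT") for _ in range(10)]

# Discrete but NOT independent: a two-state Markov chain that tends to repeat
# its previous outcome (a "streaky" process). Each step is still a discrete
# random variable, but the steps are correlated.
stay = 0.8                       # probability of repeating the previous outcome
streaky = ["H"]
for _ in range(9):
    prev = streaky[-1]
    streaky.append(prev if random.random() < stay else ("T" if prev == "H" else "H"))

print(coin)
print(streaky)
```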