
The Curious Case of 350+ Carry Backs...

cobalt_27

Footballguy
Came across an interesting set of stats that I'm sure have been reported before in one way or another. But, in looking at the history of NFL RBs with 350+ carries and how much work they get in year N+1, it appears to be twice as likely that such an RB will carry the ball fewer than 300 times in the follow-up year. From the data I have, only 10 RBs (12 instances) have equaled or surpassed the 350-carry mark again in Year N+1 (Mean=381; Median=377). By contrast, 24 RBs carried the ball fewer than 300 times in Year N+1 (Mean=197; Median=214).

350+ carries in both Year N and Year N+1:

Num Year Name Rush RushYr2

1. 1979 Campbell 368 373

2. 1980 Campbell 373 361

3. 1983 Dickerson 390 379

4. 1984 Riggs 353 397

5. 1984 Wilder 407 365

6. 1991 Smith 365 373

7. 1994 Smith 368 377

8. 1997 Davis 369 392

9. 1998 Martin 369 367

10. 1999 James 369 387

11. 2002 Williams 383 392

12. 2004 Alexander 353 370

Fewer than 300 carries in Year N+1:

Num Year Name Rush RushYr2

1. 1981 Campbell 361 157

2. 1981 Rogers 378 122

3. 1984 Dickerson 379 292

4. 1985 Allen 380 208

5. 1985 Wilder 365 190

6. 1986 Dickerson 404 223

7. 1988 Walker 361 81

8. 1989 Okoye 370 245

9. 1992 Foster 390 177

10. 1992 Smith 373 283

11. 1993 Thomas 355 287

12. 1996 Watters 353 285

13. 1998 Anderson 410 19

14. 1998 Davis 392 67

15. 2000 Bettis 355 225

16. 2000 James 387 151

17. 2001 Davis 356 207

18. 2003 Green 355 259

19. 2003 Lewis 387 235

20. 2003 McAllister 351 269

21. 2003 Williams 392 168

22. 2004 Martin 371 220

23. 2005 Alexander 370 252

24. 2005 Portis 352 127

 
I think Turner will have fewer carries, just because of the addition of Gonzo to the corral and the stated interest in getting Norwood the ball more, specifically on screens and shovels....

 
Are you making the case that there's a correlation?
Well, I don't think there's a question that there is a correlation. I just don't think we know the reason for it. It's not like the follow-up year's numbers are even close on those 24 guys who followed up with sub-par years (median 214 carries). Do I believe that it's a result of wear-and-tear, per se? Perhaps. What's your take?
 
I think Turner will have fewer carries, just because of the addition of Gonzo to the corral and the stated interest in getting Norwood the ball more, specifically on screens and shovels....
As a frustrated Norwood owner, I've been hearing this "stated interest" each of the last couple of offseasons. I doubt they rely on him any more heavily this year than last. I also doubt you'll see that much of an uptick in PaAtt out of ATL this year, even with the addition of Gonzo. We'll probably just see Ryan's Comp% increase to about the 63-64% range.
 
We have this discussion every year. And the question becomes: is there that much difference between 350 carries and, say, 340? How about 330?

Bottom line, IMO the three main reasons to be concerned about high-carry backs are: 1) Given their big workload, they clearly stayed exceedingly healthy to do so, and RBs get dinged up a fair amount. The law of averages suggests that staying almost perfectly healthy again is unlikely. 2) To get that big a workload, the OL and the team as a whole also had to stay healthy for all, if not most, of the season. Again, the law of averages suggests that having a mostly healthy offense (and sometimes defense) again is unlikely. 3) It's awfully hard to compile that many carries in a season due to game conditions in the first place. The same team could be behind more and have to pass. That extra possession before or after the half may not happen again. The fumble that bounced their way won't this year. Maybe the team faced all poor rushing defenses that year but won't this year. Basically, a lot of variables could change, making that intense a workload unlikely to repeat.

Obviously, a few backs have pieced together back-to-back heavy-workload seasons (and some several in a row). While unproven and very unscientific, if an RB has an x% chance of getting hurt on any given play, the more times he carries the ball, the more likely he is to get hurt at some point.

I wouldn't shy away from any of these backs altogether; I would just temper my enthusiasm as to what they might do with a smaller workload and be happy if they get 300 carries again this year.
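To put the "x% chance per play" idea in concrete terms, here is a minimal sketch. The per-carry injury probability used below is an assumed placeholder, not something taken from the thread's data; the only point is that a fixed per-play risk compounds with volume.

```python
# Illustrative only: cumulative injury risk when every carry has an independent,
# assumed per-carry injury probability p (the 0.002 figure is a placeholder).
def prob_injured_at_least_once(p: float, carries: int) -> float:
    """P(at least one injury in the season) = 1 - (1 - p) ** carries."""
    return 1 - (1 - p) ** carries

p = 0.002  # assumed 0.2% chance of a significant injury on any single carry
for carries in (150, 250, 300, 350, 400):
    print(f"{carries} carries -> {prob_injured_at_least_once(p, carries):.1%} season-long risk")
```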

 
We have this discussion every year. And the question becomes: is there that much difference between 350 carries and, say, 340? How about 330? ... Bottom line, IMO the three main reasons to be concerned about high-carry backs are ... I wouldn't shy away from any of these backs altogether; I would just temper my enthusiasm as to what they might do with a smaller workload and be happy if they get 300 carries again this year.
I completely agree with all of this. History would suggest that you have roughly a coin-flip chance (~54%) of getting 300+ carries in Year 2 following a 350-carry season. That's OK with me, I suppose.
 
We have this discussion every year. And the question becomes: is there that much difference between 350 carries and, say, 340? How about 330?
While I don't necessarily give any credence to the theory we're talking about here, the fact that it always comes down to this is a bit annoying. You could make the same argument about any study that is made here. There always has to be some cutoff, and it's always going to be some arbitrary number where being just over or just under it makes very little difference. So why is it this study that always gets reamed for it?

How come when someone says "this team is 12-0 when their running back runs for 100 yards or more", we don't get slews of people complaining about how the difference between running for 100 yards vs. running for 99 yards isn't really going to have any effect?
 
Are you making the case that there's a correlation?
Well, I don't think there's a question that there is a correlation.
Uh....yeah, there is --- I just asked it.

Honestly, though, I don't really have an opinion on this in particular, but I'm always skeptical when people start throwing numbers around. You can pick and choose numbers to paint any kind of picture, which is why I specifically asked if you thought there was a direct correlation.

I have no idea whether there's a correlation, so I'm not disputing it, but what if I were to toss this out as a hypothetical, as an illustration of another way of looking at it:

75% of running backs who get 100+ carries lose some substantial amount of time due to injury in year n+1.

I'm mostly using that 100-carry number in this made-up example as an artificial way of weeding out a lot of guys at the bottom of the bench who don't see enough playing time to have a substantial chance at getting injured.

Or, if you really wanted to get nutty with it, we could concoct an injury rate per carry, but that's a little complicated for a simple illustration.

So, let's just say my illustration were true.

Each year, 3 of 4 backs miss substantial time, while 1 of 4 have the opportunity to rack up their normal workload.

The following year, 3 of 4 of these healthy 25% would then fall prey to injury, on average, while 25% of the healthy 25% would continue to remain healthy.

People looking to make some kind of case about high-carry backs would be looking exclusively at the healthy 25%, because that's the only way those guys get so many carries, but they would treat this minority group as a sample of their own and draw the conclusion that 75% of them became injured in year n+1, stating that without question there was a correlation.

In this particular illustration, there is actually no correlation between a high # of carries and injury in year n+1, and while most backs couldn't repeat their workload, they don't have any greater chance of injury than any other back in the NFL.

If we wanted to go back and revisit the injury rate as a function of carries, we could make the case that more carries put the back at greater injury risk, thus tending to exert downward pressure on higher carry figures, but then would you really prefer to draft a back who would be getting few carries just to try and dodge injury?

Again, I'm just playing devil's advocate here.

What I think would be an interesting study would be to tally up all the backs who failed to get, let's just say, 150 or 200 carries in year n (a relatively light number), and find out how many of those achieved 300 or 350 carries the following year.
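A quick simulation sketch of the hypothetical above (the 25%/75% figures are the made-up numbers from the illustration, not real data): injury risk is flat every year regardless of workload, yet the "high-carry" survivors still show a 75% injury rate the next year, which is exactly the pattern that gets read as a workload effect.

```python
import random

# Sketch of the hypothetical above: a flat 75% chance of missing substantial
# time each year, completely independent of the previous year's carries.
random.seed(0)
N_BACKS = 100_000
HEALTHY_RATE = 0.25  # 1 in 4 backs stays healthy in any given year

healthy_year_n  = [random.random() < HEALTHY_RATE for _ in range(N_BACKS)]
healthy_year_n1 = [random.random() < HEALTHY_RATE for _ in range(N_BACKS)]

# "High-carry" backs are just the ones who happened to stay healthy in year N.
high_carry = [i for i in range(N_BACKS) if healthy_year_n[i]]
injured_next = sum(1 for i in high_carry if not healthy_year_n1[i])

print(f"'High-carry' backs injured in year N+1: {injured_next / len(high_carry):.1%}")
# ~75%, the same as any other back -- the workload itself added no extra risk.
```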
 
What I think would be an interesting study would be to tally up all the backs who failed to get, let's just say, 150 or 200 carries in year n (a relatively light number), and find out how many of those achieved 300 or 350 carries the following year.
No, it's a really good point. If you look at a graph of those with 350 carries and how they do in each of the following 5 years, it looks almost exactly the same as if you look at a graph of 100 carry backs and the average number of carries THEY get in each of the following 5 years.
[Image: RB350.JPG]

[Image: RB150.JPG]


(hopefully, my first attempt at linking to images works)

I'll have to think more about how to analyze these data to directly address the "wear/tear" question.

FWIW, in answer to your question, only 6 RBs had a sub-200 carry season and then went on to a 350+ carry season the following year (plus Curtis Martin, whose rookie campaign was 368 carries in 1995):

Eric Dickerson (60 to 388 in 1987 and 88)

Barry Foster (96 to 390 in 1991 and 92)

Curtis Martin (0 to 368 in 1995)

Christian Okoye (105 to 370 in 1988 and 89)

Gerald Riggs (100 to 353 in 1983 and 84)

John Riggins (177 to 375 in 1982 and 83)

James Wilder (161 to 407 in 1983 and 84)

 
How come when someone says "this team is 12-0 when their running back runs for 100 yards or more", we don't get slews of people complaining about how the difference between running for 100 yards vs. running for 99 yards isn't really going to have any effect?
:shrug: Intelligent people complain about that all the time. They also complain about the correlation vs. causation issue in what you just described.
 
We have this discussion every year. And the question becomes: is there that much difference between 350 carries and, say, 340? How about 330?
That's really just nitpicking. If there is any kind of relationship, chances are it's some kind of smooth curve and not a discrete magical figure, but because of the extremely small sample sizes, among other things, it'd be pretty hard to present any meaningful plotted data on a graph, so I don't see any reason why a guy can't pick an arbitrary high-water mark and use it as a point of discussion. Pick 340, 345, or 338, if you'd rather.

What I think would be a more legitimate gripe is the fact that in these types of discussions people tend to use carries rather than touches, and I'm not sure why they'd draw a distinction between carries and catches. It's a minor point, but still a legitimate gripe, I think.

What I'd be interested in seeing is, (to pick some arbitrary figures) among 100-carry backs that play 15+ games in year n, how many are able to duplicate that in year n+1, and how many fall short of the 15-game mark. This might at least give you some kind of baseline to start off the discussion ---- maybe up the 100 carry....oops, 'touch' figure to 200, or whatever.
 
I think I can establish that there is not a "wear/tear" effect on RBs who carry 350+ times in a season on season N+1. (Interesting how you set out to establish one point, and you wind up demonstrating the exact opposite)...

One phenomenon pointed out by someone earlier is that ALL RBs, no matter what group you analyze, tend to regress downward with each respective season. Clearly, this doesn't happen in all cases on an individual level. But, if you look at the trend, no matter what group they're from, the pattern is to drop downward. The question is, does this happen more robustly for the 350+ club...that the wear-and-tear grinds them down?

1. I selected all RBs in history (thanks to profootballreference.com) who carried 150+ times in a given season

2. I compared that 150+ carry season to season N+1 by conducting a regression analysis (essentially, of all RBs, what would we predict year 2 carry totals to be in y=mx+b format).

3. You can take any value of Season 1 (S1) carries and calculate the predicted number of carries in Season 2 (pS2) using "change scores" (this is standard practice in medicine, for example when looking for an outcome of a recommended treatment). pS2 carries are calculated by the following equation: pS2 = (S1 * 0.790) + 13.5. Again, this was derived from the earlier regression analysis.

4. All pS2 carries are subtracted from Actual Season 2 (S2) carries (S2 - pS2) so that each RB gets a value +/- what his expected carry load would have been.

5. Derive 2 groups of RBs: Group 1 = 150-250 carries in Season 1; Group 2 = 350+ carries in Season 1.

How did their respective S2 carries compare to expected pS2 carries?

Mean change score for Group 1 (N=932) was -28.4 (about 28 carries below predicted)

Mean change score for Group 2 (N=52) was -33.5

To test whether this difference was significant between the two groups, I ran a simple independent samples t-test, which was not significant (t = .42, p = .67). In other words, I could not reject the null hypothesis that said there was no difference between the two groups. The groups are statistically similar. I tried several variations on this theme and still could not derive a significant difference between any two groups.

To my knowledge, this is the most direct test as to whether RBs who grind it out one season vanish beyond expected levels the next. They don't. They will typically have fewer carries than they did the previous season, but not by any amount that's over and above what would be expected.
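For anyone who wants to reproduce the steps above, here is a minimal sketch of the procedure, assuming you have (Season 1, Season 2) carry pairs pulled yourself. The ten pairs below are placeholder values, not the actual dataset, and the fitted coefficients will only approximate the 0.790 and 13.5 quoted above when run on the real data.

```python
import numpy as np
from scipy import stats

# Placeholder (S1, S2) carry pairs for RB seasons with 150+ carries.
# Substitute the real pro-football-reference pulls to reproduce the numbers above.
pairs = np.array([(312, 280), (175, 150), (355, 290), (220, 170), (160, 205),
                  (390, 310), (200,  95), (250, 240), (365, 330), (180, 140)], dtype=float)
s1, s2 = pairs[:, 0], pairs[:, 1]

# Step 2: simple y = mx + b regression of Season 2 carries on Season 1 carries.
slope, intercept, r, p, se = stats.linregress(s1, s2)

# Steps 3-4: predicted Season 2 carries and change scores (actual minus predicted).
ps2 = slope * s1 + intercept      # the thread's fit was roughly 0.790*S1 + 13.5
change = s2 - ps2

# Step 5: compare the 150-250 carry group against the 350+ carry group.
group1 = change[(s1 >= 150) & (s1 <= 250)]
group2 = change[s1 >= 350]
t, p_val = stats.ttest_ind(group1, group2, equal_var=True)

print(f"fit: pS2 = {slope:.3f}*S1 + {intercept:.1f}")
print(f"mean change score, 150-250 group: {group1.mean():.1f}; 350+ group: {group2.mean():.1f}")
print(f"independent-samples t-test: t = {t:.2f}, p = {p_val:.2f}")
```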

 
Mean change score for Group 1 was -28.4 (about 28 carries below predicted); mean change score for Group 2 was -33.5. ... To test whether this difference was significant between the two groups, I ran a simple independent samples t-test, which was not significant (t = .42, p = .67).
hmmmm...excellent work. I'd just like to make a couple of very minor points on that ---- first of all, the first group actually includes the second group as a subset, so the -33.5 guys are pulling the first group their way a bit, although I wouldn't worry too much about that, as I expect the second group's sample size to be dwarfed by the first group's.

Also, I think the difference in results between the two groups will look even closer if you figure it as a % of carries lost rather than just a gross number, and this isn't simply manipulating results --- I think the higher-carry guys will tend to have more noise associated with their totals, so it'd be easier to fluctuate a few carries.

Of course, I can't do this accurately, but if I approximate a 225 figure for the first group and the 350 mark for the second group, it'd come out like:

first group: -28.4/225 = -12.6%

second group: -33.5/350 = -9.6%

and like I said, that's just a ballpark for illustration.

 
So is the summary basically 1) regression to the mean happens, 2) for a variety of reasons, including injury?
Well, I think that's for each person to make up their own mind about, but one specific thing I can caution you on is that if you want to discuss 'regression to the mean', you first need to define the mean.

I'm not accusing YOU of this at all, but a lot of people like to throw statistics, and statistical terms, around in their discussions because they feel it makes their posts look 'smarter', rather than trying to use numbers as an accurate analytical tool. All these football discussions inevitably revolve around VERY small sample sizes filled with a myriad of variables, so the best you can really hope for is some kind of analysis as color for a discussion --- not irrefutable proof.

Specifically, I see a lot of this 'regression to the mean' thrown out there, but nobody ever mentions what the mean is. Should I expect Tomlinson to regress to the NFL mean when he has a good year? How about Adrian Peterson? If some guy pops a good number in his third year, and first year starting, do I use his 3-year mean, compare him to the NFL mean, or use his one-year starter's mean?

Clearly, some guy logging 400+ carries is on the extreme end of the spectrum, so it'd be natural to expect him to cool off a bit, but when these guys log a great many carries there tend to be legitimate driving forces behind that --- these #'s aren't produced randomly. So, you'd have to ask yourself if your projections take these driving forces into account.
 
To my knowledge, this is the most direct test as to whether RBs who grind it out one season vanish beyond expected levels the next. They don't. They will typically have fewer carries than they did the previous season, but not by any amount that's over and above what would be expected.
There is statistical bias built into this study. That is, someone who carries 350+ times is likely a very good player, and the only reason his carries would drop off drastically is because he got hurt or wasn't as productive the next year. It is unlikely that a guy would carry 350+ times and then be replaced or have his role drastically reduced before he steps out onto the field again.

On the flip side, a guy who carries 150 times could easily be a plug-in type guy who came in because someone ahead of him got injured and will be taking his spot back next year, or a guy who just wasn't good enough to get the ball a lot and is replaced in the next offseason.

Aaron Stecker would show up in this study as a guy whose carry dropoff was -107 from last year to this year. But he didn't lose 107 carries because he got worn down by his smaller load the prior year; he lost 107 carries because he was just a fill-in to begin with. That's not really relevant data to what you're studying, but it's HUGELY skewing that same data.

The guys in group 1 are likely having the largest percentage of their carry dropoff come from situations like this. The guys in group 2 are having a large percentage of their carry dropoff come from completely different situations. You're not comparing apples to apples. Not even close.

 
...first of all, the first group actually includes the second group as a subset, so the -33.5 guys are pulling the first group their way a bit...

...I think the difference in results between the two groups will look even closer if you figure it as a % of carries lost rather than just a gross number... first group: -28.4/225 = -12.6%; second group: -33.5/350 = -9.6%
Good call on your first point. The N=52 for Group 2 should not be nearly enough to be much of a drag on the over 1,000 backs analyzed here, but I should have put the Ns up there anyway. Thanks.

I see your point on analyzing the % drop in carries, but that problem was mitigated by the fact that I ran the regression analysis to get an expected change score. It is, in fact, a % reduction across the board that is being calculated (S1*.790). Thus, what we're saying is that, when all RBs are accounted for, you should expect about 80% of the carries you got last year (plus 13.5 as a constant in the regression model). So, in a sense, you could duplicate this, but I'm not really sure whether the analyses would hold up on that method.

So, your example illustrates your point well, but it doesn't actually mirror what I did.

Say you have a back who got 350 carries in S1. You multiply that by .79 and then add 13.5: his expected number of carries for S2 is about 290. Let's say he actually gets 290 carries. Before, I was like, "Whoa, this guy tanked--he dropped 60 carries from the previous year!" But, in actuality, he landed right at his expected total for S2. Again, all RBs statistically decline by about 15-20% from the previous year {i.e., pS2 = (S1*.79) + 13.5}.

Take a guy who got 200 carries in S1. His expected S2 production would be about 171 carries the following year. If he rushed for 160 carries, you might say he dropped only 40 from his previous total, whereas the guy who carried it 350 times dropped 60! But, in actuality, this guy was about 11 carries below his expected value, whereas the guy who carried it 350 times was right at his.

I hope that helps illustrate how this method actually does take the percentage drop into account. What I'm looking at is: given the history of all RBs and how many carries they get from one season to the next, we expect a decline...and it's just a question of whether one group declines more than the other. I don't think there is any evidence that the 350+ carry backs decline any more than any other group.
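The two worked examples above, as a quick calculation (using the regression fit quoted earlier in the thread; the "actual" Season 2 totals are just the hypothetical numbers from the examples, not real players):

```python
# Change scores for the two hypothetical backs above, using pS2 = 0.790*S1 + 13.5.
def predicted_s2(s1_carries: float) -> float:
    return 0.790 * s1_carries + 13.5

for s1, s2 in [(350, 290), (200, 160)]:
    ps2 = predicted_s2(s1)
    print(f"S1={s1}: predicted S2 = {ps2:.1f}, actual S2 = {s2}, "
          f"change score = {s2 - ps2:+.1f}, raw drop = {s2 - s1:+d}")
```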

And, since this keeps coming up every year, perhaps we can put a lid on it now. :mellow:

 
first group: -28.4/225 = -12.6%

second group: -33.5/350 = -9.6%

and like I said, that's just a ballpark for illustration.
A 3% difference for the 350 group works out to about 10.5 carries relative to the first group, on average. I can live with this risk.

 
There is statistical bias built into this study. That is, someone who carries 350+ times is likely a very good player, and the only reason his carries would drop off drastically is because he got hurt or wasn't as productive the next year. ... The guys in group 1 are likely having the largest percentage of their carry dropoff come from situations like this. The guys in group 2 are having a large percentage of their carry dropoff come from completely different situations. You're not comparing apples to apples. Not even close.
It's a fair point, but not one that I'm analyzing. I didn't answer the question whether RBs who carry the ball 350+ times suffer from more wear and tear. What I did, however, was answer the question about whether there is a statistical difference between the 350+ group and the others with respect to the fall-off from one season to the next. One could hypothesize that this group falls off more than others due to wear and tear, but that hypothesis is not accurate according to these data.

Now, it is possible that wear and tear occurs and is simply mitigated by the fact that these guys are more talented, tougher, have faster recovery times, etc., than their 150-carry counterparts. If we were to put these guys on some hypothetical teams that gave them something akin to a pitch count in baseball and limited their number of carries to 200...it is possible that they would hold more consistently at the 200-carry mark each successive season. But that would require an experimental condition that we simply do not have the ability to control.

What I am saying with these data is: (a) You can expect a drop-off from the 350+ carry club on the order of 15-20% and (b) this is no more or less than the expected drop-off from any other RB who carried it 100x last season, or 200x, or 250x.

My original hypothesis, just looking at the number of consecutive 350+ carry seasons, was that it suggested there was something going on with this group to see such a dramatic decline in the number of carries the following year (there were so few instances). But, as Koolaid correctly pointed out (or, at least suggested), this may be a phenomenon that occurs with all RBs.

Indeed, he is correct.

 
There is statistical bias built into this study. ... You're not comparing apples to apples. Not even close.
That's a fair enough assessment, and the reason why I made an effort to set a carry floor and ask for 15-gamers in an earlier post, in order to try to include just legit starters, or significant-contribution RBBC guys, in the discussion. What parameters you choose in order to do this is pretty much open to discussion, although I suppose you could just post #'s at each 50-touch signpost, if you felt like doing the work, and see how they line up. No matter what kind of #'s you pick out to look at, there will always be a 'what if' just by the very nature of football, so it's never irrefutable proof, as I mentioned above, but it's interesting as an illustration of a point.
 
So is the summary basically 1) regression to the mean happens, 2) for a variety of reasons, including injury?
...if you want to discuss 'regression to the mean', you first need to define the mean.
Kool-Aid, all good points. I'm operating under the assumption that most RBs don't carry 350 times in a season; in other words, the mean is much lower than 350 carries.

Does that imply AP suddenly won't get 350+ carries each year? Beats me. What it probably DOES imply is that if we took all RBs who had 350+ carries last year, the odds are that a majority of them will not have >350 carries this year.

BTW, I don't throw out statistical terms to "sound smarter". It's just the way I think. :mellow:
 
It's a fair point, but not one that I'm analyzing. I didn't answer the question whether RBs who carry the ball 350+ times suffer from more wear and tear. What I did, however, was answer the question about whether there is a statistical difference between the 350+ group and the others with respect to the fall-off from one season to the next.
Well, to be fair, you did say it, specifically. And I quote: "I can establish that there is not a "wear/tear" effect on RBs who carry 350+ times in a season on season N+1." And then you said it again a couple lines after saying you never said it. :goodposting:
One could hypothesize that this group falls off more than others due to wear and tear, but that hypothesis is not accurate according to these data.
Not really. The best you can do here is "that hypothesis is not proven correct by this data". To say that that hypothesis is not accurate due to that data is completely untrue, since that's not what that data tests, in your own words.
What I am saying with these data is: (a) You can expect a drop-off from the 350+ carry club on the order of 15-20% and (b) this is no more or less than the expected drop-off from any other RB who carried it 100x last season, or 200x, or 250x. My original hypothesis, just looking at the number of consecutive 350+ carry seasons, was that it suggested there was something going on with this group to see such a dramatic decline in the number of carries the following year (there were so few instances). But, as Koolaid correctly pointed out (or, at least suggested), this may be a phenomenon that occurs with all RBs.
True, but the thing is that many of those players from group 1 that I mentioned, who fell off for other reasons (the team drafting a replacement, the replacement coming back, etc.), are easy to see coming. Dominic Rhodes changed teams and is now the #3 RB on his team, so when his carries drop off by 100+ next year it's not going to be at the expense of the people who drafted him too highly, because his value has already been adjusted for that.

Unfortunately, there is really no way to do a study on high-carry RBs vs. low-carry RBs, because the high-carry RBs will always have the built-in statistical advantage of being, as a whole, better players who are much less at risk of losing their jobs because someone else on the roster is a better player than them. I can't think of a single 350+ carry RB that was replaced or had his role drastically reduced before even stepping out onto the field for year N+1.

If it were possible to do, I would imagine that if you ran that same study using only players that were given a chance in year N+1 to reprise the same role in the offense they had in year N, the numbers would look completely different. Unfortunately, it's not really possible to find a sample size with that much detail.

Basically (not totally, but the talent differentiation makes it play out like this), what this study boils down to is:

Players that had 350+ carries in Year N whose carries dropped off in year N+1 due to:
-injury
-much worse production

Players that had 150+ carries in Year N whose carries dropped off in year N+1 due to:
-injury
-much worse production
-replaced by a rookie RB
-replaced by a free agent RB
-traded to or signed by a new team
-entered year N+1 as a backup
-moved to a RBBC
etc.

There are reasons for the dropoff among the second group of players that don't really apply to the first group. The thing about those additional reasons is that most are predictable before the season. If a team drafts a rookie RB in round 1 to replace their 158-carry mediocre tailback, then that 158-carry mediocre tailback's fantasy stock drops prior to you ever drafting him in year N+1.
 
To my knowledge, this is the most direct test as to whether RBs who grind it out one season vanish beyond expected levels the next. They don't. They will typically have fewer carries than they did the previous season, but not by any amount that's over and above what would be expected.
I would ask that you consider Matt Walden's research on RBs that carry X-plus times in a season. He has cataloged with great diligence a direct correlation between players with high carry totals and decreased performance the following year. You may also want to check out his RSP for great analysis of skill-position rookies and their potential to translate college talent to the NFL. The case has been made by Matt and the evidence is indisputable: backs with an exorbitant number of carries, including receptions, will generally have a marked decline in performance the following year. There are always exceptions!!
 
It's a fair point, but not one that I'm analyzing. I didn't answer the question whether RBs who carry the ball 350+ times suffer from more wear and tear. What I did, however, was answer the question about whether there is a statistical difference between the 350+ group and the others with respect to the fall-off from one season to the next.
Well, to be fair, you did say it, specifically. And I quote: "I can establish that there is not a "wear/tear" effect on RBs who carry 350+ times in a season on season N+1."

And then you said it again a couple lines after saying you never said it. ;)

One could hypothesize that this group falls off more than others due to wear and tear, but that hypothesis is not accurate according to these data.
Not really. The best you can do here is "that hypothesis is not proven correct by this data". To say that that hypothesis is not accurate due to that data is completely untrue, since that's not what that data tests, in your own words.
What I am saying with these data is: (a) You can expect a drop-off from the 350+ carry club on the order of 15-20% and (b) this is no more or less than the expected drop-off from any other RB who carried it 100x last season, or 200x, or 250x.

My original hypothesis, just looking at the number of consecutive 350+ carry seasons, was that it suggested there was something going on with this group to see such a dramatic decline in the number of carries the following year (there were so few instances). But, as Koolaid correctly pointed out (or, at least suggested), this may be a phenomenon that occurs with all RBs.
Unfortunately, there is really no way to do a study on high-carry RBs vs. low-carry RBs, because the high-carry RBs will always have the built-in statistical advantage of being, as a whole, better players who are much less at risk of losing their jobs... If it were possible to do, I would imagine that if you ran that same study using only players that were given a chance in year N+1 to reprise the same role in the offense they had in year N, the numbers would look completely different. Unfortunately, it's not really possible to find a sample size with that much detail.
You might see it as semantics, but the verbiage matters here. There is a difference between asking (a) whether RB350s suffer wear and tear and (b) whether they suffer the effects of wear and tear. I answered the second question, not the first.

As for your last point, I'll let you decide whether or not you can analyze something like that (it sounds like you don't think so). I am answering a different question: can you expect a larger drop-off from RB350s compared to other RBs? Some would suggest that RB350s have burned more tread off the tires and, as such, drop off more in their performance the following year. At this moment, I am comfortable saying that is not the case--not any more so than any other RB150 or RB250 or what-have-you.

I am, however, holding out for the possibility that earlier seasons of 350 carries hold up better than later seasons of 350. But, that's a complicated statistical analysis that I likely won't have time to conduct right now.

 
I think I can establish that there is not a "wear/tear" effect on RBs who carry 350+ times in a season on season N+1. (Interesting how you set out to establish one point, and you wind up demonstrating the exact opposite)...

One phenomenon pointed out by someone earlier is that ALL RBs, no matter what group you analyze, tend to regress downward with each respective season. Clearly, this doesn't happen in all cases on an individual level. But, if you look at the trend, no matter what group they're from, the pattern is to drop downward. The question is, does this happen more robustly for the 350+ club...that the wear-and-tear grinds them down?

1. I selected all RBs in history (thanks to profootballreference.com) who carried 150+ times in a given season

2. I compared that 150+ carry season to season N+1 by conducting a regression analysis (essentially, of all RBs, what would we predict year 2 carry totals to be in y=mx+b format).

3. You can take any value of Season 1 (S1) carries and calculate the predicted number of carries in Season 2 (pS2) using "change scores" (standard practice in medicine, for example, when evaluating the outcome of a recommended treatment). pS2 is calculated from the regression above: pS2 = 0.790*S1 + 13.5.

4. Predicted pS2 carries are subtracted from actual Season 2 (S2) carries (S2 - pS2), so each RB gets a value showing how far above or below his expected carry load he finished.

5. Derive 2 groups of RBs: Group 1 = 150-250 carries in Season 1; Group 2 = 350+ carries in Season 1.

How did their respective S2 carries compare to expected pS2 carries?

Mean change score for Group 1 (N=932) was -28.4 (about 28 carries below predicted)

Mean change score for Group 2 (N=52) was -33.5

To test whether this difference was significant between the two groups, I ran a simple independent samples t-test, which was not significant (t = .42, p = .67). In other words, I could not reject the null hypothesis that said there was no difference between the two groups. The groups are statistically similar. I tried several variations on this theme and still could not derive a significant difference between any two groups.

To my knowledge, this is the most direct test of whether RBs who grind it out one season vanish beyond expected levels the next. They don't. They will typically have fewer carries than they did the previous season, but not by any amount that's over and above what would be expected.
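
For anyone who wants to kick the tires on this approach, here's a minimal sketch of the change-score analysis in Python (pandas/scipy). The CSV file and column names are my own invention, and the regression is refit from the data rather than hard-coding the 0.790/13.5 figures above; it's a sketch of how the analysis could be reproduced, not the exact procedure I used.

# Minimal sketch of the change-score analysis described above.
# Hypothetical CSV: one row per RB with his Year N and Year N+1 carries.
import pandas as pd
from scipy import stats

df = pd.read_csv("rb_seasons.csv")          # hypothetical file name

# Steps 2-3: fit Year N+1 carries as a linear function of Year N carries.
# The fit described above came out to roughly pS2 = 0.790*S1 + 13.5.
slope, intercept, r, p, se = stats.linregress(df["carries_s1"], df["carries_s2"])
df["pred_s2"] = slope * df["carries_s1"] + intercept

# Step 4: change score = actual Year N+1 carries minus predicted.
df["change"] = df["carries_s2"] - df["pred_s2"]

# Step 5: the two groups compared above.
grp1 = df[(df["carries_s1"] >= 150) & (df["carries_s1"] <= 250)]["change"]
grp2 = df[df["carries_s1"] >= 350]["change"]

print(grp1.mean(), grp2.mean())             # mean change scores
print(stats.ttest_ind(grp1, grp2))          # independent-samples t-test
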
I would ask that you consider Matt Walden's research on RBs that carry the ball X-plus times in a season. He has cataloged with great diligence a direct correlation between high carry rates and decreased performance the following year. You may also want to check out his RSP for great analysis of skill position rookies and their potential to translate college talent to the NFL. The case has been made by Matt and the evidence is indisputable: backs with an exorbitant number of carries (including receptions) will generally have a marked decline in performance the following year. There are always exceptions!!
Well, I don't think people should take too kindly to anyone who suggests that their data are indisputable. The data are always disputable; it's simply a matter of whose methodology and approach more directly answers the questions. I'm doing this on the fly tonight, so while I feel confident in interpreting these data, there is definitely room for discussion and revision here (I've already reversed my position once today; I might do it again if I find convincing evidence to do so).

I don't know who Matt Walden is, and google was no help. Please link to the indisputable evidence. TIA.

 
Players that had 350+ carries in Year N whose carries dropped off in year N+1 due to:
- injury
- much worse production

Players that had 150+ carries in Year N whose carries dropped off in year N+1 due to:
- injury
- much worse production
- replaced by a rookie RB
- replaced by a free agent RB
- traded to or signed by a new team
- entered year N+1 as a backup
- moved to a RBBC
- etc.

There are reasons for the dropoff in group 2 players that don't really apply to group 1 players. The thing about those additional reasons is that most are predictable before the season. If a team drafts a rookie RB in round 1 to replace their 158-carry mediocre tailback, then that 158-carry mediocre tailback's fantasy stock drops prior to you ever drafting him in year N+1.
RB350 compared to RB280 (280-320 rushes) was also nonsignificant. I don't think the lame-duck argument can be made for that group nearly as much as for the RB150-299 group.
 
Here we go, another year, another rehashing of a thoroughly debunked arbitrary # of 'high' carries argument.

Where's the thread about the correlation of 200 carry backs who are just as likely to have <300 carries in year N+1? ;)

Aren't we all smart enough to realize very few RBs get insanely high carry totals every single year?

 
If it has been thoroughly debunked already, you'd think that a link would be readily available on this topic. A previous poster seems to think it's indisputable that there is an effect. ;)
 
Oh, you mean Matt Waldman. Here's, perhaps, the article of indisputable evidence...Waldman Article

"In 2006 and 2007 I worked with Tony San Nicholas to provide one of the more compelling fantasy football studies of the RB position available anywhere." :sadbanana:

May I respectfully submit that this article does nothing to address the statistical decline seen across all groups of RBs over time. He concluded, as I did earlier, that because the proportion of high-carry RBs who drop off seems high, the effect must be real. But I don't see anything in this article or elsewhere where he compares this group to any other group of RBs, which was my error earlier when I started this thread. I think this has to be addressed before we can conclude that they see a disproportionate reduction in carries in year N+1.

 
Last edited by a moderator:
Waldman - thanks. I hope he doesn't read this thread!! Not being argumentative with your premise, just suggesting that he has done a great deal of research on the topic and put forth a convincing argument. Food for thought. I look forward to reading your analysis as well; I am always interested in good research. I hope not to butcher your name if I reference your work. :lmao: A little less beer before I post again!!
 
Last edited by a moderator:
Let's take Waldman's premise that 370-carry RBs are more prone to losing carries the following year. I certainly don't have injury data for every RB in existence, but let's just assume Waldman's correct here. Then it would follow that the change between years N and N+1 would be markedly different for the RB370 group (RB1) than for, say, those who had 280-320 rushes (RB2).

When looking at the difference in change scores (which were now recalculated to include a regression on all RBs over 280 carries), here are the averages:

RB1 (N=52) = -5.6 carries (370+ carry guys)

RB2 (N=219) = +1.4 carries (280-320 carry guys)

An independent samples t-test shows that this difference is not significant (t = .431, p = .667).

So, I still think this effect of wear/tear stuff is a bit overstated.

 
Last edited by a moderator:
I hope he does read this thread so that he can, perhaps, lend a little more insight into his analyses. Maybe he addressed some of these concerns somewhere beyond what's explicitly stated in his article (or I glossed over them while reading). But this sort of peer review is essential for providing clarity on any issue. My methodology should be put up to the same level of scrutiny as his (and has been in this thread already).

But the main point I think he misses, and the one I've tried to address here, is that any group of RBs you look at declines over time. I'd like to know if he still feels that the RB370 group drops at a steeper rate the following year compared to any other group. And, if so, what statistical measure(s) did he use to address that question?

 
Here's my take on one angle of the issue.
I took your point about separating the data: I used pre-1995 RBs to formulate the regression comparing Year N and Year N+1 for all RBs over 280 carries (N = 122). I then selected all RBs from 1995 to 2007 and split them into two groups: RB370+ ("RB1") and RB280-320 ("RB2"). Comparing Year N and Year N+1 for these groups showed the following:

RB1 (N=29) = 371.97 (Y1) versus 269.03 (Y2)...a 27% decrease in carries between the two seasons

RB2 (N=120) = 314.4 (Y1) versus 243.03 (Y2)...a 23% decrease in carries

Expected change scores for all 1995-2007 RBs (derived from the pre-1995 data/regression analysis) are as follows:

RB1 = -1.52

RB2 = +5.0

The difference between the relative reductions in these two groups is not significant (t = -.275, p = .78).

I like the sharper methodology here, but still...same result. You can't expect any more significant drop in the 370 group than the group who had between 280-320 carries.
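
For anyone who wants to reproduce that split-sample check, here's a rough sketch in Python. The file and column names are hypothetical (same one-row-per-RB layout as the earlier sketch); the point is just to show the pre-1995 fit being applied out-of-sample to the 1995-2007 backs.

# Fit the Year N -> Year N+1 regression on pre-1995 RBs with 280+ carries,
# then score 1995-2007 RBs against that out-of-sample expectation.
import pandas as pd
from scipy import stats

df = pd.read_csv("rb_seasons.csv")                           # hypothetical file
train = df[(df["year"] < 1995) & (df["carries_s1"] >= 280)]
test = df[(df["year"] >= 1995) & (df["year"] <= 2007)].copy()

slope, intercept, _, _, _ = stats.linregress(train["carries_s1"], train["carries_s2"])
test["change"] = test["carries_s2"] - (slope * test["carries_s1"] + intercept)

rb1 = test[test["carries_s1"] >= 370]["change"]              # 370+ carry backs
rb2 = test[test["carries_s1"].between(280, 320)]["change"]   # 280-320 carry backs

print(rb1.mean(), rb2.mean())                                # expected-change scores
print(stats.ttest_ind(rb1, rb2))                             # difference between groups
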

Draft your workhorse with confidence. I think the rest is all myth. :unsure:

 
Last edited by a moderator:
Yeah, and I would just like to add a little something --- as much as I've derided the phrase 'regression to the mean', there IS an upper boundary to a spectrum of touches, and 400+ is right up there. You can invent any mean you want, but 400+ is clearly at the very top end of a statistically small sample size, which means a couple things:

You WILL tend to see somewhat more downward pressure on touches over the course of the season, mostly because any peak number is usually aided by a variety of noise from other variables. This doesn't have to be a big effect, but I think the tendency to lose 5-10 carries would be a little stronger for the 400-carry guys than for the 250-carry guys. Or, to look at it another way, the percentage of 250-carry guys who add substantial carries to help prop up the averages should be much higher than the percentage of 400-carry guys, simply because the 400-carry guys are that much closer to the practical ceiling.

Also, because it's a small sample, all it takes is for one of those 400 touch guys to get knocked out for most of the following season and your resultant average carries can be visibly affected.

What I'm saying is that one 400 carry guy in a sample of 30-50 is going to carry far more weight than one 250 carry guy in a sample of 100-200.
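
To put a toy number on that selection/ceiling point, here's a quick simulation sketch in Python. Every parameter in it is invented purely for illustration: each back gets a "true" workload, two noisy observed seasons, and a practical cap on carries. The backs whose observed Year N lands at 400+ are disproportionately the ones whose noise broke upward, so their Year N+1 falls back harder than the mid-carry group's.

# Toy regression-to-the-mean simulation; all numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
true_load = rng.uniform(150, 380, 100_000)                 # each back's "true" yearly workload
year1 = np.clip(true_load + rng.normal(0, 40, true_load.size), 0, 420)
year2 = np.clip(true_load + rng.normal(0, 40, true_load.size), 0, 420)

top = year1 >= 400                                         # observed 400+ carry seasons
mid = (year1 >= 240) & (year1 <= 260)                      # observed ~250 carry seasons

print((year2[top] - year1[top]).mean())                    # sizable negative change
print((year2[mid] - year1[mid]).mean())                    # much smaller change
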

So, I think Cobalt's numerical results are pretty much in line with what I'd expect, and I'd agree with the conclusions he draws from them.

 
The 370-carry threshold is statistical bunk, and what's more, the guys who came up with that threshold know it's statistical bunk, if they know anything about statistics. They chose the 370-carry threshold because it was the number they needed to choose to make their hypothesis work, and they discount all the obvious objections to their methods by pointing back to their curve-fit numbers with minuscule sample sizes.

The fact is, no matter what exceptional statistic you look at, there will be some number you can choose as a "point of no return," as they put it; when you look at the historical record, you can find a number of carries, or touches, or fantasy points, or yardage, above which historical RBs have performed poorly in year N+1. It's a statistical certainty. What is not certain--in fact, what is complete bunk--is the idea that because someone passed the historical threshold, he will perform poorly next year.

Put it this way: The #1 fantasy RB in year N is very rarely the #1 fantasy RB in year N+1. Does that mean you're wrong to project him as #1 in year N+1? No, because all else being equal, he is still the most likely RB to be #1 overall. Even though historically #1 RBs have performed poorly in year N+1, they've still performed better than the #2 RB, the #3 RB, and so on.

It's bunk. Look at the individuals and their situations.

 
Regression to the mean is a very real effect, but it applies to everyone at every level of workload. A few years ago I did a detailed analysis of the statistical achievements of backs at different carry intervals and, truth be told, the evidence just wasn't convincing as a tool to downgrade someone BECAUSE they had a heavy workload the year before. You can assume someone with 350+ carries is going to see fewer touches, but that doesn't mean you should assume he won't be an impact fantasy player.

The same kind of analysis can work in the other direction. A lot of people are discounting what DeAngelo Williams did last year, suggesting that he can't average 5.5+ YPC again and/or match his 20 TDs on so few carries. Yet I don't see many people accepting that he could easily increase his touches. Or, more importantly, a quick look at the comparable historical exemplars shows they are ALL elite backs, and were top 10 fantasy backs more times in their careers than not.

Or take some of the statistical mean regressions we don't apply. Matt Forte is still being counted on to catch 60/70/80 passes this year. Yet if you look at 2nd-year RBs who had at least 60 catches as rookies, they show, on average, a 23% decline in receptions. Yet not many folks expect Forte to catch 40-45 receptions, do they?

 
The thing is, the "Curse of 370" behaves very much like the "Curse of 300," which behaves very much like the "Curse of 250," which behaves very much like the "Curse of 100," and so on. I don't care what number of carries you start with (e.g., 100, 192, 258, 370, etc.), you will see a precipitous fall starting in Year N+1. It's just the reality. Now, the explanation for that variance may change from group to group. Perhaps the 370 guys do wear down faster than the 170 guys. But, in terms of draft strategy, it's irrelevant because you see the same pattern with every group in-between. So, while I don't think the authors of the "Curse of 370" intend to mislead or know that they are making suspect inferences from the data they have analyzed, I do think it's important to point out that all groups of RBs except rookies show a decline from the previous season when looking solely at the number of carries they had in a season. Thus, I think The Curse is really The Myth.

 
That's interesting, Jason. I hadn't thought to look at the catches that way. I have been thinking Forte may see a slight uptick in receptions due to a lack of other options. Others have told me they think Cutler will throw a lot more to the WRs. I don't know. But those are interesting odds you laid out there.

I guess LT was one exception:

Year    Age  Tm   Pos  G   GS  Att  Yds   TD  Lng  Y/A  Y/G    A/G   Rec  Yds  Y/R  TD  Lng  R/G  Y/G   YScm  TotTD  Fmb
2001    22   SDG  RB   16  16  339  1236  10  54   3.6  77.3   21.2  59   367  6.2  0   27   3.7  22.9  1603  10     8
2002*   23   SDG  RB   16  16  372  1683  14  76   4.5  105.2  23.3  79   489  6.2  1   30   4.9  30.6  2172  15     3
2003    24   SDG  RB   16  16  313  1645  13  73   5.3  102.8  19.6  100  725  7.3  4   73   6.3  45.3  2370  17     2
2004*+  25   SDG  RB   15  15  339  1335  17  42   3.9  89.0   22.6  53   441  8.3  1   74   3.5  29.4  1776  18     6
2005*   26   SDG  RB   16  16  339  1462  18  62   4.3  91.4   21.2  51   370  7.3  2   41   3.2  23.1  1832  20     3
2006*+  27   SDG  RB   16  16  348  1815  28  85   5.2  113.4  21.8  56   508  9.1  3   51   3.5  31.8  2323  31     2
2007*+  28   SDG  RB   16  16  315  1474  15  49   4.7  92.1   19.7  60   475  7.9  3   36   3.8  29.7  1949  18     0
2008    29   SDG  RB   16  16  292  1110  11  45   3.8  69.4   18.3  52   426  8.2  1   32   3.3  26.6  1536  12     1

Or maybe he didn't make the cutoff because he only had 59 catches his rookie year?

 
Here's my take on one angle of the issue.
Personally, I am drafting Larry Johnson third this year. (2007)
My condolences. Heh...that was a good article, though.

I've gotta get to bed, but a minor point I'd note (and have just noted in another recent post) is that I'm a little suspicious the top-shelf guys might be at a slight disadvantage in your article simply by already being at the top of the spectrum. Specifically, if you look at the 30+ second tier list, you'll see that the top half dozen n+1 VBD guys appear among the 25 upper-echelon year n guys. So when you average out the second tier VBD, it gets a big bump from a few guys who were able to increase their carries in n+1.

Conversely, the upper-tier guys would have to work pretty hard against probability to produce similar improvement on their 400-carry n years, just by virtue of their current position at the top of the spectrum. While equivalent improvement is unlikely, I suppose you could make the case that a bunch more of them could simply hold their year n form. But then you run into the problem that, for a few of those guys, that would mean a statistically unusual peak performance in 3 consecutive years at a high number of carries: year n-1 on the second tier list, moving up to the upper tier list in year n, then holding that form in n+1. Really, what are the chances a guy can get away with 3 consecutive 350-carry seasons in the NFL?

 
And, running with the theme of this thread...what about the VBD stability (or, likely, change) in the second tier guys? And, the third tier guys? And so on? I absolutely suspect a similar drop-off.
 
I agree with Maurile's assessment of the Curse of 370. Yet I still happen to think that it is more likely than not that a running back's recent work rate matters in increasing serious injury rates, and it is not merely regression to the mean.

My previous research on the issue is here:

http://www.pro-football-reference.com/blog/?p=328

http://www.pro-football-reference.com/blog/?p=330

http://www.pro-football-reference.com/blog/?p=483

The first one looks at injury rates for backs, by workload through the first six weeks of the season, over the remainder of the season, for the years 1995-2006. The second looks at injury rates for backs over the final six weeks of the season and playoffs, at the start of the next season, for 1995-2005. The third looks at injury report rates for each injury classification, sorted by workload, for the 2007 season.

So, three independent data sets were examined. In each, the backs with the most carries in the short term had the highest serious injury rates immediately thereafter. Oh, I should say four data sets: the second post also looked at playoffs back to 1978, and there the serious injury rates at the start of the next season were higher than the average starter injury rates at the start of the next season.

Pro Football Reference now has game by game data available back to 1960 (when I did the original research, it only went back to 1995, which is why that year was the starting point), so I hope to re-examine it with more independent data sets to see if the pattern continues or not.

 
In any of these investigations, did you make statistical comparisons between high- and low-workload groups and their rates of injury (e.g., t-tests, ANOVA, regression)? If so, I'm not seeing it. Of course, it's way late, and I could very well be short on attention span.

 
the serious injury rates . . .
It looks like you're measuring injury rate in terms of something like injuries per month; I think the better measure would be something like injuries per 100 touches. It should go without saying that a guy who plays in games is more likely to get injured than someone who just sits on the sidelines. That doesn't mean that a guy who just sits on the sidelines has more fantasy value.

Similarly, a guy who gets 25 touches a game will be more likely to get injured in a given set of four games than a guy who gets 15 touches a game. But is he more likely to get injured in a given set of 100 touches? I think that's a more apples to apples comparison.

My theory for why guys who get lots of touches in Year N have an above-average likelihood of getting injured in Year N+1 is simply that touches per game in Year N are positively correlated with touches per game in Year N+1, and touches per game in Year N+1 are positively correlated with injuries per month in Year N+1.

But that doesn't mean guys who got a lot of touches last year should be avoided. If two guys each have a 5% chance of sustaining a serious injury in any given series of 100 touches, and one guy gets 25 touches per game while the other gets 15 touches per game, the first guy will be more likely to get injured during October than the second guy. But the first guy is still quite a bit more valuable than the second guy (assuming similar fantasy points per touch).
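
To make the per-touch point concrete, here's a toy calculation in Python. Every number in it is invented for illustration: both backs get the same per-touch injury risk, so the workhorse looks more injury-prone per month even though he is no more fragile per touch.

# Toy illustration: same per-touch injury risk, different workloads.
# All rates here are made up; nothing is estimated from real data.
p_injury_per_touch = 0.0005            # assumed 0.05% chance of serious injury per touch

def prob_injured(touches):
    # probability of at least one serious injury over the given number of touches
    return 1 - (1 - p_injury_per_touch) ** touches

workhorse_touches = 25 * 4             # 25 touches/game over a 4-game month
committee_touches = 15 * 4             # 15 touches/game over the same month

print(prob_injured(workhorse_touches))   # ~4.9% for the month
print(prob_injured(committee_touches))   # ~3.0% for the month
print(prob_injured(100))                 # ~4.9% per 100 touches for either back
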

 
The concern I have with anecdotal stories about players who had big workloads and then...*gasp*...broke down is that, without any sort of analysis comparing groups, you're stuck doing an eyeball test with nothing to measure against. You've given a neat and detailed account of guys with large workloads who fell off a cliff a year later (or, in some cases, showed up on the injury report a week later). But there is no accounting for the regularity with which guys break down regardless of workload. Football's a tough sport. Guys with light workloads get stuff torn, broken, tweaked, strained, etc. So, without any test comparing these groups against each other, I don't think there's much you can say.

I take particular issue with stuff like this:

And yes, two of those players suffered immediate leg injuries (as did others not on my list), and another broke a foot in game 8. But what was even more noticeable about these players was the loss of effectiveness even before any injury appeared–I was looking at projecting injuries, and predicted a group of five players that were even worse from a performance stand point. For contrast, all other running backs in 2007 regular season and playoffs combined for 50,580 yards on 12,069 attempts, for an average of 4.19 yards per attempt. Only Jackson finished near the mean. Of the sixty-six running backs who had at least 60 rushing attempts in 2007, Rudi Johnson was dead last in terms of yards per carry, and LJ and Shaun Alexander joined him in the bottom 10. This is not simply regression to the mean.
What you didn't say was that Warrick Dunn was on that list. As was Cedric Benson. As was Adrian Peterson (CHI). And Reuben Droughns. And DeShaun Foster. Julius Jones. Kolby Smith. These guys were all among those with 60+ carries and below 3.7 YPC (and all in the bottom 10). What I believe you were trying to say here was that the immense workload that LJ, SA, and RJ carried the year before translated into their poor performance in 2007. So how do you explain Cedric Benson, who had a (light) 141 carries the year before at 4.1 YPC? How do you explain Warrick Dunn's 2006 (286, 4.0)? Adrian Peterson carried the ball only 10 times in 2006. Droughns had 220 carries in 2006 (3.4). DeShaun Foster had 227 carries at 4.0. Thomas Jones 296/4.0. Julius Jones 267/4.1. And, while Kolby Smith didn't play in 2006, he carried the ball only 114 times in 2007, so I don't know how workload would factor into his 3.6 YPC that year.

I think what you need is a comparison group and some analyses comparing these guys. I would predict that there is no correlation between workload and injury (or performance).
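
If someone did want to run that comparison, a minimal sketch might look like the following (the file and column names are hypothetical, and a logistic regression would work just as well as the chi-square test used here): group backs by prior-year workload, flag whether they were injured the next year, and test whether the injury rates differ.

# Sketch of a workload-vs-injury comparison between groups.
# Hypothetical CSV: one row per RB-season, with prior-year carries and an
# injury flag for the following season (1 = missed significant time).
import pandas as pd
from scipy import stats

df = pd.read_csv("rb_injuries.csv")                    # hypothetical file
high = df[df["prior_carries"] >= 350]["injured"]
low = df[df["prior_carries"].between(150, 250)]["injured"]

# 2x2 contingency table: workload group x injured / not injured
table = [[high.sum(), len(high) - high.sum()],
         [low.sum(), len(low) - low.sum()]]
chi2, p, dof, expected = stats.chi2_contingency(table)

print(high.mean(), low.mean())                         # raw injury rates per group
print(p)                                               # p-value for the difference
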

 
Last edited by a moderator:
If it has been thoroughly debunked already, you'd think that a link would be readily available on this topic. A previous poster seems to think it's indisputable that there is an effect. :lmao:
Myth of 370
It's been thoroughly debunked by those that understand statistics. Those that say it has not been debunked do not understand statistics.

 
Last edited by a moderator:
Those who claim that Football Outsiders does not understand statistics don't understand what they were saying. There's a point where you can disagree with someone or a hypothesis and have a healthy debate around the premise. Perhaps poke some fun. But I don't see anything here that suggests the theory is totally debunked, nor do I see anything here to suggest that the FO guys don't understand statistics. You're on my side of this debate, but I'm standing at a distance from the other inferences you draw here.
 
