Fantasy Football - Footballguys Forums


Do Past Injuries Predict Future Injuries?

Bamac

Footballguy
Conclusion: compared to WRs who missed no games in year n, WRs who missed at least one game in year n missed, on average, 0.67 more games in year n+2. Additional games missed in year n did not predict additional games missed in year n+2.

Methodology: using pro-football-reference for all stats, I included as data points all WRs who started more than 50% of games played in both year n and year n+2. (I excluded Jerome Simpson, 2012; Kenny Britt 2012; and Vincent Jackson, 2010, who missed games due to suspension or holdout.) Roughly 40 WRs started more than 50% of games played in a given year. In total, there were 186 year n data points. I calculated average games missed in year n+2 for several ranges of games missed in year n.
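The bucketing described above can be sketched in plain Python. The data layout and sample values here are invented for illustration, not taken from the actual spreadsheet:

```python
# Sketch of the methodology: for each threshold of games missed in year n,
# average the games missed in year n+2. Each row is a (missed_n, missed_n2)
# pair for one qualifying WR-season; the sample values below are made up.

def avg_missed_n2(rows, min_missed_n):
    """Average games missed in year n+2 among WRs who missed
    at least `min_missed_n` games in year n."""
    subset = [n2 for n, n2 in rows if n >= min_missed_n]
    return sum(subset) / len(subset) if subset else None

def zero_missed_avg(rows):
    """Average games missed in year n+2 among WRs who missed 0 games in year n."""
    subset = [n2 for n, n2 in rows if n == 0]
    return sum(subset) / len(subset) if subset else None

# Tiny invented sample: (games missed in year n, games missed in year n+2)
sample = [(0, 1), (0, 2), (1, 3), (2, 2), (4, 1)]

print(zero_missed_avg(sample))          # 1.5
for k in (1, 2, 3, 4):
    # For this sample: 1 -> 2.0, 2 -> 1.5, 3 -> 1.0, 4 -> 1.0
    print(k, avg_missed_n2(sample, k))
```

Note that the "missed k+ games" buckets are cumulative (each is a subset of the previous one), which matches how the results below are reported.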

Results:

- The 102 WRs who missed 0 games in year n averaged 1.51 missed games in year n+2.

- The 84 WRs who missed 1+ games in year n averaged 2.18 missed games in year n+2.

- The 56 WRs who missed 2+ games in year n averaged 2.21 missed games in year n+2.

- The 30 WRs who missed 3+ games in year n averaged 1.70 missed games in year n+2.

- The 19 WRs who missed 4+ games in year n averaged 1.67 missed games in year n+2.

In other words, WRs who missed 1+ games in year n missed 0.67 more games in year n+2 than those who missed 0 games in year n. WRs who missed 2+ games in year n missed only 0.03 more games in year n+2 than those who missed 1+. WRs who missed 3+ games in year n actually missed fewer games in year n+2 than those who missed only 1 or 2 games in year n. This is probably due to small sample size.

Edit to correct typos. Also, here's a link to the final spreadsheet: https://docs.google.com/spreadsheet/ccc?key=0Ag7bsWRQOhTEdG44U3JZMmdRcmtpNE9pa2hYa3pKdWc&usp=sharing

 
IMO, just looking at games played or games missed is somewhat misleading. I think it's more useful to look at the number and severity of injuries than at straight time missed.

For example, a lot of people have mentioned that Wes Welker is tough and hasn't missed time while Danny Amendola is frail and can't stay on the field. But looking closer, if we swapped when each player's major injury occurred, we might have a different perspective even though they both endured the same injuries.

Amendola suffered a season-ending injury one year in the first game of the season. Welker suffered a season-ending injury in the last game of the season. If we swapped WHEN those injuries were sustained, Welker would have missed 15 games and Amendola none. But both still sustained a major injury. So the time missed was a random outcome (i.e., the timing of the injuries dictated how much time was missed).

Similarly, if PLAYER X missed 10 games from a variety of injuries (say hamstring, ankle sprain, concussion, swollen knee, etc.), I think that might earn more of an injury prone label than a player that had one serious injury and nothing else, even if that player missed more games.

 
Agree that approach would be more precise. A couple of points, though:

First, surprisingly few starters miss 2+ games in a season, and it doesn't seem that additional missed games past 1 suggest higher injury risk.

Second, the Welker/Amendola problem creates random noise, not systematic bias.

If you want to code number and type of injuries, more power to you. I doubt it's worth the extra labor, though.
 
Agreed. I think counting injuries and assigning arbitrary "severity" values to them might improve the precision, but I doubt it would radically change the outcome, since those injuries are relatively randomly distributed throughout the year; very, very few data points will have suffered a severe injury without missing a game.

Thanks for crunching the numbers. My takeaway is that injury "proneness" is just a very, very weak factor, which confirms what I've seen from other studies with other methodologies. The fact that all of these different approaches are finding similar results is great from a confirmation standpoint. I've never seen the n+2 approach before; it's a novel way to look at it.

My biggest concern is that by removing everyone who doesn't play at least 8 games, you're open to some selection bias: guys like Torry Holt who washed out of the league due to injury won't show up, which does tend to underrate the effect of "injury proneness" and would explain why your 0.67-game difference is slightly lower than other methods have produced (another possible explanation is that by using year n+1, other methods are still catching the fallout from the year n injury). Still, I don't know how to help that other than going through every WR by hand and manually sorting, which is obviously untenable for a quick study. In all, I doubt the effect is big enough to throw the results off by much.

This is a big part of the reason why I believe the currently injured are the most underrated class of players in dynasty football. The myth (or gross exaggeration) of injury proneness, combined with hyperbolic discounting (a bias toward present value over future value), means guys who suffer a major injury can be had for far less than they are worth.
 
It's interesting that guys with minor injuries who miss only a game or two in year n go on to miss roughly 46% more games in year n+2, while guys who miss 3+ games (presumably more severe injuries) miss only about 13% more.
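Those relative increases can be double-checked directly from the group averages in the OP's results (a quick sanity check, no new data; with those numbers the increases round to 46% and 13%):

```python
# Relative increase in average games missed in year n+2,
# using the group averages from the OP's results.
baseline = 1.51   # WRs who missed 0 games in year n
avg_2plus = 2.21  # WRs who missed 2+ games in year n
avg_3plus = 1.70  # WRs who missed 3+ games in year n

pct_2plus = (avg_2plus - baseline) / baseline * 100
pct_3plus = (avg_3plus - baseline) / baseline * 100
print(round(pct_2plus), round(pct_3plus))  # 46 13
```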
 
My biggest concern is that by removing everyone who doesn't play at least 8 games, you're open to some selection bias
I should have been clearer about this. I used anyone who started more than half of the games in which he appeared. For instance, Burleson played only one game in 2008, but he started it, so it counts. I used this approach to try to weed out marginal players who might not appear in games even if healthy. And you nailed my reasoning for using a two-year gap: trying to avoid injury spillover.

ETA: agree that the starter requirement could still create some sample bias, just smaller than if I had required 8+ starts.
 
Second, the Welker/Amendola problem creates random noise, not systematic bias.
I'm not sure I can agree completely here. With a large enough data set, yes. How many players are included in your sample?
You think players injured in year n are more likely to get injured earlier (or later) in year n+2 than others who get injured in year n+2? How do you figure? Also, the sample size is in the OP.
 
My guess is a noise/sample-size issue, but it could be that the 1-2 game guys have more hamstring pulls and ankle sprains (more likely to recur?) while the 3+ guys have torn ACLs and broken bones.
 
Cool, thanks for clarifying. That's a pretty reasonable filter, and it addresses another potential concern: backups and special-teamers are far less likely to get injured because they appear in so many fewer plays.
Those would be my first guesses, too. Missing one game is not the same thing as appearing in the injury report for one game- witness Brian Westbrook or Steve McNair. And also, random is random.
 
I have no opinion on that question; I was merely pondering the methodology. I think the sample size is not big enough for the Amendola/Welker scenario to be dismissed as "noise." However, I do think you mitigate it by using year n+2 instead of n+1. (I missed that part on my first go-round.) To be clear, I always applaud the statistical approach, so nice post. :thumbup:
 
Thanks. I think we're on the same page. I didn't mean to dismiss Amendola/Welker when I called it noise; just that it doesn't systematically bias the results for/against finding a correlation. Noise can definitely mess things up with too small a sample.
 
Statistics are great, but I think people often miss the mark in interpretation. Often this is because they're expecting to find something specific and only think along those lines. For example, you were hoping to find a correlation between games missed and injury the next season. But if a player suffers a significant injury and misses a larger number of games, he takes a lot less wear and tear.
 
I'm not convinced that workload or "mileage" or "wear and tear" are meaningful predictors of future injury.
 
Statistics are great, but I think people often miss the mark in interpretation. Often this is because they're expecting to find something specific and only think along those lines. For example, you were hoping to find a correlation between games missed and injury the next season. But if a player suffers a significant injury and misses a larger number of games, he takes a lot less wear and tear.
One of these statements is evidence of the other's truth. ;)
 
I think lumping all injuries together creates a problem. A freak broken leg isn't likely to be repeated, but a torn triceps or pectoral muscle actually does lead to a weakening of that muscle and increases the likelihood of future injury.

There are also guys that have bad knees and are likely to end up missing time due to more knee problems. Some guys don't stretch enough or eat right and we see repeated hamstring/groin/quad pulls. Some guys have had enough ankle sprains that the ligaments in their ankles are loose and lead to more sprains. And then there's concussions.

So yeah, I think the type of injury matters a lot. Due to the type of injury, I would argue that some guys are more prone to missing future games. It's not a guarantee that they will, it's just physiologically more likely that they will than someone that hasn't had that type of injury.

Comparing guys with a broken leg to guys with torn ACLs to guys with a torn meniscus to guys with concussions makes no sense to me.

It's why making blanket statements on injury issues is goofy. Analyze the specific player and injury along with his injury history. That's much more meaningful than a bunch of overly broad statistics.

When I'm looking at a guy like Cecil Shorts, I don't really care how many games Adrian Peterson, Danny Amendola or Julio Jones missed. I want to know the statistics for players that also missed time due to 2 concussions within 6 months. That's much more meaningful to me.

 
When I'm looking at a guy like Cecil Shorts, I don't really care how many games Adrian Peterson, Danny Amendola or Julio Jones missed. I want to know the statistics for players that also missed time due to 2 concussions within 6 months. That's much more meaningful to me.
So you're talking about a sample size of, what, 10-15? Maybe smaller when you consider recent changes in treatment. Those stats would be useless.
 
And yet they'd still be more meaningful than comparing Cecil Shorts to Julio Jones or Maurice Jones-Drew.
 
A torn triceps or pectoral muscle actually does lead to a weakening of that muscle and increases the likelihood of future injury. There are also guys that have bad knees and are likely to end up missing time due to more knee problems. Some guys don't stretch enough or eat right, and we see repeated hamstring/groin/quad pulls. Some guys have had enough ankle sprains that the ligaments in their ankles are loose and lead to more sprains. And then there's concussions.
This is a pretty good list of the most common impactful injuries to WRs. And yet, it seems that WRs who miss games due to injury in year n miss only 0.67 more games in year n+2. Do you have a method to identify which guys have bad knees or poor stretching habits and which ones have bad luck? If not, these stats give us a decent idea of what to expect.
 
I don't really have a full list at this point, but a few guys off the top of my head:

- Stevie Johnson: He has had problems with pulled leg muscles twice in two years now, and there have been quite a few rumblings of poor offseason conditioning.

- Steven Jackson: The guy is a warrior, but he had his knee drained multiple times. That's not good.

- Cecil Shorts: Two concussions in one season is not good at all.

- MJD: Anyone missing that many games due to a Lisfranc injury has to have red flags.

- Ahmad Bradshaw: Foot injury after foot injury for a RB is a major red flag.

- Another Buffalo guy, on the other side of the ball, is DT Kyle Williams. Two years in a row he's had major problems with bone spurs rubbing his achilles, first on one foot, then the other. That doesn't seem promising.
 
