Fantasy Football - Footballguys Forums


The Rule of 26-27-60

Faust

MVP
http://sportsillustrated.cnn.com/2010/writ...rule/index.html

John P. Lopez, Inside The NFL

Perhaps we should not be stunned by JaMarcus Russell's utter flop as an NFL quarterback -- low-lighted this week by his arrest for possession of a controlled substance in Alabama.

But could a simple formula have warned us of Russell's lack of NFL readiness? And Ryan Leaf's and David Carr's and other failed, high-pick quarterbacks?

Call it the Rule of 26-27-60.

Here is the gist of it: If an NFL prospect scores at least a 26 on the Wonderlic test, starts at least 27 games in his college career and completes at least 60 percent of his passes, there's a good chance he will succeed at the NFL level.

There are, of course, exceptions. If NFL general managers always could measure heart, determination and other intangibles, then Tom Brady would not have been drafted in the sixth round.

But short of breaking down tape, conducting personal interviews and analyzing every number and every snap of every game, remember the Rule of 26-27-60 the next time a hotshot prospect comes down the pike.

Since 1998, these are some of the NFL quarterbacks who aced all three parts of the Rule of 26-27-60: Peyton Manning, Philip Rivers, Eli Manning, Tony Romo, Matt Schaub, Kyle Orton, Kevin Kolb, Matt Ryan, Ryan Fitzpatrick, Mark Sanchez and Matt Stafford.

(see article for table here)

Meanwhile, among the once highly-touted prospects who failed at least one part of the formula: Ryan Leaf, Joey Harrington, Michael Vick, Akili Smith, Tim Couch, Daunte Culpepper, David Carr, Vince Young and JaMarcus Russell.

(see article for table here)

There are a few notable exceptions to the rule but only by slight margins. Drew Brees started 26 instead of 27 games at Purdue, but fit the formula in every other way. Two-time Super Bowl champ Ben Roethlisberger scored a 25 on the Wonderlic, just one point short of the standard of 26. Jay Cutler -- a mixed-bag thus far in the NFL -- scored exactly a 26 on his Wonderlic and had the starts, but completed 57 percent of his passes at Vanderbilt.

(see article for table here)

How about the quarterback class of 2010? Top pick Sam Bradford aces the rule easily, but the other three high-profile rookie QBs -- the Browns' Colt McCoy, the Broncos' Tim Tebow and the Panthers' Jimmy Clausen -- all fall short on the Wonderlic.

(see article for table here)

It also stands to reason why the Rule of 26-27-60 makes sense as a quick guide to NFL quarterbacking success.

The 26 represents the minimum Wonderlic score required for a passing grade. Consider some of the lower-scoring quarterbacks drafted since 1998 when it comes to the Wonderlic: Vick (who scored a 20), Akili Smith (26), Couch (22), Carr (24), Young (16, first reported as a six) and Russell (24). All of them have been considered under-achievers at best, busts at worst.

The most notable exceptions to the rule are Brett Favre, who scored a reported 22 on the Wonderlic, and Donovan McNabb, who scored a reported 14.

The 27 represents the minimum number of starts a quarterbacking draft prospect should have had in college to make the grade. Ask any NFL scout if he would rather have 12 games to grade or 27. Playing a lot of games means more opportunity to hone your craft in the heat of battle and gain confidence in your ability to perform under pressure. That translates well to the next level. Oregon's Akili Smith was drafted in 1999 after making just 11 collegiate starts. He ultimately made just 17 starts in Cincinnati.

And how many quarterbacks, like Leaf and Russell, have been drafted based on "upside"? That is just another way of saying a player couldn't complete 60 percent of his passes in college. Do you really think he can do it at the next level?

The exceptions are few. Finding NFL quarterbacks certainly is a science, but it's not rocket science. When in doubt, turn to the Rule of 26-27-60.
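For anyone who wants to play with it, the rule reduces to three comparisons. Here's a minimal sketch in Python (the `Prospect` class, its field names, and the sample numbers are my own illustrative stand-ins, not anything pulled from the article or a real database):

```python
from dataclasses import dataclass

@dataclass
class Prospect:
    name: str
    wonderlic: int    # Wonderlic test score
    starts: int       # college games started
    comp_pct: float   # college completion percentage

def passes_26_27_60(p: Prospect) -> bool:
    """True only if the prospect clears all three thresholds of the rule."""
    return p.wonderlic >= 26 and p.starts >= 27 and p.comp_pct >= 60.0

# Illustrative near-miss cases described in the article (numbers the
# article doesn't give are made up for the example):
brees = Prospect("Drew Brees", 28, 26, 61.0)    # one start short at Purdue
cutler = Prospect("Jay Cutler", 26, 45, 57.0)   # completion percentage short

print(passes_26_27_60(brees), passes_26_27_60(cutler))  # prints: False False
```

Note the rule is a hard AND: a miss on any one leg fails the whole test, which is exactly why the Brees and Roethlisberger near-misses come up in the article.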

 
These guys took Football Outsiders' research and added in the Wonderlic. They should have mentioned that.

 
Faust said:
http://sportsillustrated.cnn.com/2010/writ...rule/index.html (article quoted in full above)
Sounds like another one of those statistics that gets twisted until it appears to mean something. With that said, it's still a fun little stat.
 
This thread has to have some "what about?s", so I'll start:

Wonderlic

Matt Leinart 35

Brett Favre 22

Dan Marino 16

 
The 26 threshold for the Wonderlic score is explained, and the 60% completion percentage cutoff seems reasonable, since it is a round number and is sometimes used as a barometer for good completion percentage.

But why is 27 starts the specific threshold for that element of the criteria? Unless that can be explained, it seems like it might be curve fitting to use that specific number.

Is it the case that college teams nowadays tend to play an average of 13.5 games per year, meaning it is equivalent to 2 years of starting? But what about teams that don't make conference championship and/or bowl games? Wouldn't they have fewer than 27 games over 2 seasons? It seems like a number that will generally require college QBs to start games in 3 or more college seasons.

I'm sure that is a great indicator of possible NFL success, since it's intuitive that being good enough to start in 3+ seasons is a reasonable indicator of talent/skill... and a larger sample size of 3+ seasons is always better than a smaller sample size, from an evaluation standpoint. But using an odd number like 27 seems fishy to me.

 
Faust said:
http://sportsillustrated.cnn.com/2010/writ...rule/index.html (article quoted in full above)
:lmao: Fun read, and possibly useful for those "on the fence" decisions.

 
Faust said:
But could a simple formula have warned us of Russell's lack of NFL readiness?
No. The formula is only useful in retrospect. There's no difference between this article and an article saying "draft Peyton Manning and Carson Palmer, don't draft JaMarcus Russell and Ryan Leaf."

Could a simple formula help you draft? Sure, just draft Peyton Manning and Carson Palmer, and don't draft JaMarcus Russell and Ryan Leaf.

 
This is a case of solving for a formula that fits your world view, versus finding a formula with predictive value. Pure rubbish IMHO.

I do think underlying the "formula" are some rather logical determinations, which is to say:

1) Experience reduces risk -- The larger a body of work to evaluate (and experience for the player executing his skills), the lower the risk of completely misreading a player

2) Wonderlic -- This one leaves me hanging for the obvious exceptions, but all things being equal of course you want a player to be smart and able to solve problems mentally under duress.

3) High completion rate -- In today's college game, a prospect that completes less than 60% of his passes is likely either a flawed, raw player that you'll need to project into an NFL-style system down the road, or a player stuck with such an awful supporting cast that it, again, gets hard to really see the player for what he is versus the situation he was put in.

 
Faust said:
But could a simple formula have warned us of Russell's lack of NFL readiness?
No. The formula is only useful in retrospect. There's no difference between this article and an article saying "draft Peyton Manning and Carson Palmer, don't draft JaMarcus Russell and Ryan Leaf."

Could a simple formula help you draft? Sure, just draft Peyton Manning and Carson Palmer, and don't draft JaMarcus Russell and Ryan Leaf.
You have to love how all these "formulas" end up listing Culpepper, Vick and McNabb in with Leaf, Couch and Smith. Yeah, those first three really didn't do anything in their careers.

 
This is a case of solving for a formula that fits your world view, versus finding a formula with predictive value. Pure rubbish IMHO. [...]
Curious how you call it "pure rubbish" then go on to say all 3 parts have merit. Strange. And if a rookie QB passes all 3 criteria, he should have a better chance at NFL success than if he were lacking in 1 or 2.

Not saying I'd draft purely on these 3 stats, but it's something to look at.
 
But using an odd number like 27 seems fishy to me.
27 is a round number in base three, and three is the combined number of eyes and noses that every single Hall of Fame QB has. So there you go.

Either that, or there is an unsuccessful NFL QB who had a 60% completion rate, a high Wonderlic score, and 26 college starts.

I'll bet you $7.19 it's the latter.

 
This is a case of solving for a formula that fits your world view, versus finding a formula with predictive value. Pure rubbish IMHO.I do think underlying the "formula" are some rather logical determinations, which is to say:1) Experience reduces risk -- The larger a body of work to evaluate (and experience for the player executing his skills), the lower the risk of completely misreading a player2) Wonderlic -- This one leaves me hanging for the obvious exceptions, but all things being equal of course you want a player to be smart and able to solve problems mentally under duress. 3) High completion rate -- In today's college game, a prospect that completes less than 60% of his passes is likely a flawed, raw player that you'll need to project into an NFL style system down the road OR playing with such an awful supporting cast that it, again, gets hard to really see the player for what he is versus the situation he was put in
This is why I post the articles from other sites -- I very much appreciate the debate and analysis that comes from the Shark Pool. The article struck me as one of those that proves its point "after the fact," with a dose of careful manipulation of the cut-off points.
 
This is a case of solving for a formula that fits your world view, versus finding a formula with predictive value. Pure rubbish IMHO. [...]
Curious how you call it "pure rubbish" then go on to say all 3 parts have merit. Strange. And if a rookie QB passes all 3 criteria, he should have a better chance at NFL success than if he were lacking in 1 or 2.

Not saying I'd draft purely on these 3 stats, but it's something to look at.
Sure, if a QB passes all 3 criteria, he should have a better chance of success than if he were lacking. That would also be true if the three criteria were arm strength, height and work ethic.
 
Faust said:
But could a simple formula have warned us of Russell's lack of NFL readiness?
No. The formula is only useful in retrospect. There's no difference between this article and an article saying "draft Peyton Manning and Carson Palmer, don't draft JaMarcus Russell and Ryan Leaf."

Could a simple formula help you draft? Sure, just draft Peyton Manning and Carson Palmer, and don't draft JaMarcus Russell and Ryan Leaf.
How is this any different from virtually every historical study done by FBGs?

I still prefer the LCF developed by Football Outsiders, since their model adds the requirement that a QB be drafted in the first or second round. The SI study apparently seeks to replace the task of scouting, whereas the LCF doesn't apply to QBs drafted in the 3rd round or later. IMO the LCF is more of a secondary filter beyond draft status that helps you determine whether scouts had enough data to evaluate a QB, or whether a single team made a mistake in drafting a player off of a limited data set or measurables or whatnot. The LCF also doesn't have specific benchmarks, but is of a "more is better" nature.

 
Faust said:
But could a simple formula have warned us of Russell's lack of NFL readiness?
No. The formula is only useful in retrospect. There's no difference between this article and an article saying "draft Peyton Manning and Carson Palmer, don't draft JaMarcus Russell and Ryan Leaf."

Could a simple formula help you draft? Sure, just draft Peyton Manning and Carson Palmer, and don't draft JaMarcus Russell and Ryan Leaf.
How is this any different from virtually every historical study done by FBGs?
Because historical studies by FBG aren't based on data snooping: http://en.wikipedia.org/wiki/Data-snooping_bias

 
Trent Edwards better watch out.

Ryan Fitzpatrick: 48 Wonderlic, 25 starts, 59.9 comp pct

Brian Brohm: 32 Wonderlic, 33 starts, 65 comp pct

 
Trent Edwards better watch out.
Ryan Fitzpatrick: 48 Wonderlic, 25 starts, 59.9 comp pct
Brian Brohm: 32 Wonderlic, 33 starts, 65 comp pct
I know you're being facetious, but I actually think Brohm could start in Buffalo and ultimately be a much better option than Trent or Fitzy. <_<
 
I see a lot of people dumping on this study....and for good reasons.

But I also see that the basic premises are essentially correct. More starts = more experience = better prepared = more success. Higher Wonderlic = better able to quickly assimilate data = better able to read defenses and adjust to complex schemes = more success. Higher completion % = more accurate passer = more success.

In other news, the sky is blue and grass is usually some shade of green....but there are exceptions to both.

 
JaxBill said:
Trent Edwards better watch out.
Ryan Fitzpatrick: 48 Wonderlic, 25 starts, 59.9 comp pct
Brian Brohm: 32 Wonderlic, 33 starts, 65 comp pct
And Brohm was a 2nd round pick too, satisfying the other Football Outsiders requirement (Fitz was a UDFA and Edwards a 4th rounder).

"That's gold, Jerry! Gold!"
 
I see a lot of people dumping on this study....and for good reasons. [...]
People are dumping on this 'study' because it's not really a study. The use of arbitrary cutoff lines clues you in. Real studies do things like regression analysis to figure out how strong the correlation between performance and each variable is. This study just takes a few variables, finds cutoffs that make one list of QBs look better than another, and then calls it a rule to make it sound catchy, while not even remotely demonstrating that these three are the best -- or even good -- predictors of success.
 
Jason Wood said:
JaxBill said:
Trent Edwards better watch out.
Ryan Fitzpatrick: 48 Wonderlic, 25 starts, 59.9 comp pct
Brian Brohm: 32 Wonderlic, 33 starts, 65 comp pct
I know you're being facetious, but I actually think Brohm could start in Buffalo and ultimately be a much better option than Trent or Fitzy. :shrug:
He will likely be starting before the end of the year, though probably not at the beginning, as they try to find someone who will have success behind that woeful OL. I think all three will start at some point.

Unfortunately, he's unlikely to light it up given the weak overall offense, and I'm guessing the QBOTF is next year's 1st round pick.
 
My suspicions of McNabb's lack of brain power and quirkiness (and the possible ramifications for his departure from the city of Brotherly Love) seem confirmed ...

 
Chase Stuart said:
guderian said:
Chase Stuart said:
Faust said:
But could a simple formula have warned us of Russell's lack of NFL readiness?
No. The formula is only useful in retrospect. There's no difference between this article and an article saying "draft Peyton Manning and Carson Palmer, don't draft JaMarcus Russell and Ryan Leaf."

Could a simple formula help you draft? Sure, just draft Peyton Manning and Carson Palmer, and don't draft JaMarcus Russell and Ryan Leaf.
How is this any different from virtually every historical study done by FBGs?
Because historical studies by FBG aren't based on data snooping: http://en.wikipedia.org/wiki/Data-snooping_bias
"Data-snooping bias can occur when researchers either do not form a hypothesis in advance or narrow the data used to reduce the probability of the sample refuting a specific hypothesis." If you say so. I'm not here to criticize what FBGs does.

Historical statistical analysis can only go so far in a sport where only 16 games provide a limited data set. It's a flaw that we have to live with, or go play fantasy baseball. If you limit your data back to 1998 like they did, you open yourself to accusations that you didn't use enough data. If you extend your data back to the 70s and 80s, you come across as too naive to recognize the changes in the sport since then and appear to be a hostage to the data.

What SI did was crank through data and come up with benchmarks; what you typically do is ask "how did players do historically who were in a similar, definable situation?" One could make the argument that both analyses are only applicable in retrospect. Like I said, I'm not here to criticize FBGs, but if you're going to criticize what SI did you need to do better than to say that historical analyses and formulas only work in retrospect.
 
Steed said:
These guys took FootballOutsider's research and added in the Wunderlic. They should have mentioned that.
:kicksrock: Whatever you think of FO's work on this, the worst thing about the article is its failure to cite its obvious inspiration.
 
If a player is drafted by a team that starts with "A", "P", "S", or "N" on a date when it rains despite being over 65 degrees, and there is a full moon, that quarterback will be a good, solid NFL player.

What a dumb idea.

 
"Data-snooping bias can occur when researchers either do not form a hypothesis in advance or narrow the data used to reduce the probability of the sample refuting a specific hypothesis." If you say so. [...] if you're going to criticize what SI did you need to do better than to say that historical analyses and formulas only work in retrospect.
Those are really big differences. By cranking through the data and then coming up with benchmarks, you're answering the question. Here's a great article Maurile wrote on the topic: http://subscribers.footballguys.com/2007/0...ay_curse370.php

There is a problem with testing hypotheses like the "Curse of 370." Such hypotheses are typically formed using all the data currently available - which means that there are no fresh data left to test them on. It is a fundamental rule of hypothesis-testing that, whenever possible, you should not use the same data to both formulate and test your hypothesis. A short example will illustrate why this is so.

Suppose I roll a six-sided die 100 times and analyze the results. I will be able to find many patterns in the results of those 100 rolls. I may find, for example, that a three was followed by a six 40% of the time, or that a one was never followed by a six.

Would you trust any such patterns to hold true over the next hundred rolls? You shouldn't. If they do, it would just be coincidence. It is easy to find patterns by looking for them in a given set of data; but the test of whether those patterns are meaningful is whether they hold true in data that have not yet been examined.

So if the "Curse of 370" hypothesis was formed using data available up through the 2003 season, it should be tested only on data from 2004 and later.

The problem is that we are left with too small a sample of data to meaningfully test it on. Since 2003, only three RBs have played seasons following up a 370+ carry season - Jamal Lewis in 2004, Curtis Martin in 2005, and Shaun Alexander in 2006. (Ricky Williams had 392 carries in 2003, but did not play a follow-up season. If he had missed the 2004 season due to injury, it would make sense to include him in our data set; but since he missed the 2004 season due to retirement, which was almost certainly unrelated to his number of carries in 2003, his 2004 nonperformance is just noise.) Lewis, Martin, and Alexander all had terrible follow-up seasons, far underperforming the median from the group of 9 RBs coming off seasons with 344-369 carries during that period. So the "Curse of 370" theory is currently going three for three. The problem is that going three for three is not a sufficient track record to be considered confirmed in a statistically significant sense.

The most commonly used standard of statistical significance is about five percent, or two standard errors, which means you will get a false positive in about one out of every 20 tests, on average. Whether this standard is the appropriate one to use for evaluating the "Curse of 370" will be discussed below. For now, suffice it to say that the standard cannot be satisfied with a sample of three players.

But do we really have to limit ourselves to data from 2004 and beyond? It is preferable, but is it absolutely necessary? What if all 25 of the RBs who had ever played follow-up seasons to 370-carry years had underperformed the median from the 344-369-carry group? There must be a point where the pattern is so strong that we would be justified in accepting the Curse on the basis of "back-testing" it against previously known data, right?

Right. If the most commonly used standard of statistical significance when testing a hypothesis against fresh data is two standard errors, the typical standard when testing it against previously known data is four standard errors. ("Think of it as two standard errors to develop the hypothesis, and then two more to test it," writes Stanford Wong in his book, Sharp Sports Betting.)
There is no difference between this study (or FO's study) that says 26-27-60=good and a study that says 40% of the time you roll a 3, a 6 follows. Want proof? Look at this set of data!

You can't formulate and test your hypothesis off of the same set of data. That's why it's bunk. And that's not what FBG does. If the 26-27-60 people had used one set of data to formulate their theory, and then tested that theory on another set of data, and found the results held, *then* it would be legitimate. But that's not what they did.
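The formulate-on-one-set, test-on-another discipline can be sketched in a few lines (Python; the data here are synthetic and hypothetical by construction, with success deliberately made pure noise): mine a training sample for the best-looking Wonderlic-style cutoff, then check it on a holdout sample.

```python
import random

random.seed(0)

def make_player():
    """Hypothetical prospect: a score, plus a success flag that is pure
    coin-flip noise, deliberately unrelated to the score."""
    return random.randint(15, 45), random.random() < 0.5

players = [make_player() for _ in range(400)]
train, test = players[:200], players[200:]

def hit_rate(data, cutoff):
    above = [success for score, success in data if score >= cutoff]
    return sum(above) / len(above) if above else 0.0

# Step 1: data-mine the training sample for the cutoff that looks best.
cutoff = max(range(15, 46), key=lambda c: hit_rate(train, c))

# Step 2: the honest test -- does the mined cutoff hold up on fresh data?
print(cutoff, hit_rate(train, cutoff), hit_rate(test, cutoff))
```

Because success was generated independently of the score, any edge the mined cutoff shows on the training set is guaranteed to be noise; the holdout test is what exposes that, and it is exactly the step the 26-27-60 article skipped.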

 
Chase Stuart said:
guderian said:
Chase Stuart said:
Faust said:
http://sportsillustrated.cnn.com/2010/writ...rule/index.html

John P. Lopez>INSIDE THE NFL

Perhaps we should not be stunned by JaMarcus Russell's utter flop as an NFL quarterback -- low-lighted this week by his arrest for possession of a controlled substance in Alabama.

But could a simple formula have warned us of Russell's lack of NFL readiness?
No. The formula is only useful in retrospect. There's no difference between this article and an article saying "draft Peyton Manning and Carson Palmer, don't draft JaMarcus Russell and Ryan Leaf."

Could a simple formula help you draft? Sure, just draft Peyton Manning and Carson Palmer; don't draft JaMarcus Russell and Ryan Leaf.
How is this any different than virtually every historical study done by FBGs?
Because historical studies by FBG aren't based on data snooping.

http://en.wikipedia.org/wiki/Data-snooping_bias
"Data-snooping bias can occur when researchers either do not form a hypothesis in advance or narrow the data used to reduce the probability of the sample refuting a specific hypothesis."
If you say so. I'm not here to criticize what FBGs does. Historical statistical analysis can only go so far in a sport with only 16 games providing a limited data set. It's a flaw that we have to live with, or go play fantasy baseball. If you limit your data back to 1998 like they did, you open yourself to accusations that you didn't use enough data. If you extend your data back to the '70s and '80s, you come across as too naive to recognize the changes in the sport since then and appear to be a hostage to the data. What SI did was crank through data and come up with benchmarks; what you do is typically to say "how did players do historically that were in a similar, definable situation?" One could make the argument that both analyses are only applicable in retrospect. Like I said, I'm not here to criticize FBGs, but if you're going to criticize what SI did, you need to do better than to say that historical analyses and formulas only work in retrospect.
I think you're misunderstanding Chase. His point isn't that data analyses only work in retrospect. His point is that they did a flawed data analysis, one that is biased toward their data set such that their data set is the only one you can trust it to be useful with.

So when he said, "The formula is only useful in retrospect," he meant, "The formula is only useful on the data set it was created from." Chase can correct me if I'm incorrectly putting words in his mouth.

Edit to add: And of course Chase might have already spoken for himself before I got off the phone and finished this post. :thumbup:

 
Last edited by a moderator:
Faust said:
http://sportsillustrated.cnn.com/2010/writ...rule/index.html

John P. Lopez>INSIDE THE NFL

Perhaps we should not be stunned by JaMarcus Russell's utter flop as an NFL quarterback -- low-lighted this week by his arrest for possession of a controlled substance in Alabama.

But could a simple formula have warned us of Russell's lack of NFL readiness? And Ryan Leaf's and David Carr's and other failed, high-pick quarterbacks?

Call it the Rule of 26-27-60.

Here is the gist of it: If an NFL prospect scores at least a 26 on the Wonderlic test, starts at least 27 games in his college career and completes at least 60 percent of his passes, there's a good chance he will succeed at the NFL level.

There are, of course, exceptions. If NFL general managers always could measure heart, determination and other intangibles, then Tom Brady would not have been drafted in the sixth round.

But short of breaking down tape, conducting personal interviews and analyzing every number and every snap of every game, remember the Rule of 26-27-60 the next time a hotshot prospect comes down the pike.

Since 1998, these are some of the NFL quarterbacks who aced all three parts of the Rule of 26-27-60: Peyton Manning, Philip Rivers, Eli Manning, Tony Romo, Matt Schaub, Kyle Orton, Kevin Kolb, Matt Ryan, Ryan Fitzpatrick, Mark Sanchez and Matt Stafford.
Sanchez only started 16 college games. Someone screwed up their data.
 
Last edited by a moderator:
There is no difference between this study (or FO's study) that says 26-27-60=good and a study that says 40% of the time you roll a 3, a 6 follows. Want proof? Look at this set of data!

You can't formulate and test your hypothesis off of the same set of data. That's why it's bunk. And that's not what FBG does. If the 26-27-60 people had used one set of data to formulate their theory, and then tested that theory on another set of data, and found the results held, *then* it would be legitimate. But that's not what they did.
The Freshman stats article is nice, but when do you formulate your hypothesis using one set of data and test it on another? Article 1

You used one set of data to conclude "Based off some historical comparisons, it appears that a healthy Brees has a fantasy floor that's extremely high (top 10 in fantasy points per game) and a good chance to boast elite fantasy numbers (three players finished as QB1 in their 5th season)." I'm not criticizing your work, because your alternative would be to do something silly like come up with an estimated ranking based on the QBs that were in the AFC and to test it using QBs that played in the NFC. Unfortunately, if you did that you'd have a sample size in the single digits.

Article 2

You used one set of data to conclude "it stands to reason that Chicago should rank in the top five in pass attempts, and based on Cutler's skill level, in the top five in yards."

Those were just the first two examples that I looked at. In both cases you took a single set of data, crunched it and came to a conclusion. SI came up with 26/27/60 and you came up with Brees having a floor of top 10 and a good chance at "elite" that you didn't define and that Chicago should rank top 5 in pass attempts and Cutler would be top 5 in yards.

The stats theory is nice, but their 26/27/60 is similar to your top 10/elite and top 5/5.

 
Last edited by a moderator:
I think you're misunderstanding Chase. His point isn't that data analyses only work in retrospect. His point is that they did a flawed data analysis, one that is biased toward their data set such that their data set is the only one you can trust it to be useful with.

So when he said, "The formula is only useful in retrospect," he meant, "The formula is only useful on the data set it was created from." Chase can correct me if I'm incorrectly putting words in his mouth.

Edit to add: And of course Chase might have already spoken for himself before I got off the phone and finished this post. :)
That's all fine and dandy, and I agree with it, but he's holding SI to a standard that he doesn't adhere to himself. I don't have a problem with that, because in football we're typically limited to data sets in the 10s, but in the first two articles I looked at there was no independently determined hypothesis that was subsequently tested on a different set of data. I really don't want to come off as critical of FBGs--it's not their fault, but the sport that we've chosen to fool with isn't typically amenable to the application of rigorous statistical methods.
 
Last edited by a moderator:
"Data-snooping bias can occur when researchers either do not form a hypothesis in advance or narrow the data used to reduce the probability of the sample refuting a specific hypothesis."
If you say so. I'm not here to criticize what FBGs does. Historical statistical analysis can only go so far in a sport with only 16 games providing a limited data set. It's a flaw that we have to live with, or go play fantasy baseball. If you limit your data back to 1998 like they did, you open yourself to accusations that you didn't use enough data. If you extend your data back to the '70s and '80s, you come across as too naive to recognize the changes in the sport since then and appear to be a hostage to the data. What SI did was crank through data and come up with benchmarks; what you do is typically to say "how did players do historically that were in a similar, definable situation?" One could make the argument that both analyses are only applicable in retrospect. Like I said, I'm not here to criticize FBGs, but if you're going to criticize what SI did, you need to do better than to say that historical analyses and formulas only work in retrospect.
I'm going to set aside the data-snooping discussion, because the article is worthy of much criticism even without that.

First, as has been mentioned, it's essentially plagiarized. But just from some internet site, so whatever, I guess. They're SI; they can do that.

Second, they don't show us the full data, or even summarize it in any way.

Since 1998, these are some of the NFL quarterbacks who aced all three parts of the Rule of 26-27-60...
and here, by the way, are some of the others: Brady Quinn, Rex Grossman, Matt Leinart, Charlie Frye, and Kellen Clemens.
Meanwhile, among the once highly-touted prospects who failed at least one part of the formula...
Also among the same group are Tom Brady, Carson Palmer, and Aaron Rodgers.

Here is a list of all quarterbacks who debuted in 1998 or later and have made a pro bowl, along with whether or not they meet all three criteria:

Derek Anderson: no

Tom Brady: no

Drew Brees: no

Marc Bulger: yes

Daunte Culpepper: no

Jay Cutler: no

Jake Delhomme: ??

Jeff Garcia: ??

David Garrard: no

Brian Griese: no

Matt Hasselbeck: no

Eli Manning: yes

Peyton Manning: yes

Donovan McNabb: no

Carson Palmer: no

Philip Rivers: yes

Aaron Rodgers: no

Ben Roethlisberger: no

Tony Romo: yes

Matt Schaub: yes

Michael Vick: no

Kurt Warner: ??

Vince Young: no

So that's 6 yes, 14 no, and 3 maybe.

Now how does that compare to some sort of control group, maybe all first- and second-round picks who have not made a pro bowl, or something like that? We don't know, because the article provides no context, just a couple of selectively-chosen lists.

It's like saying, "defensive players from the ACC are great. Just look at Derrick Brooks, Brian Dawkins, James Farrior, Julius Peppers, Trevor Pryce, Kris Jenkins, Keith Brooking, Greg Ellis, and Patrick Kerney. Meanwhile, these notable first-round busts did NOT go to ACC schools: Vernon Gholston, Johnathan Sullivan, DeWayne Robertson, Wendell Bryant, Philip Buchanon, Jimmy Kennedy, and Michael Haynes."

 
Last edited by a moderator:
guderian said:
I still prefer the LCF developed by Football Outsiders since their model adds the requirement that a QB be drafted in the first or second round. The SI study apparently seeks to replace the task of scouting whereas the LCF doesn't apply to QBs drafted in the 3rd round or later. IMO the LCF is more of a secondary filter beyond draft status that helps you determine if scouts had enough data to evaluate a QB or did a single team make a mistake in drafting a player off of a limited data set or measurables or whatnot. The LCF also doesn't have specific benchmarks, but is of a "more is better" nature.
Also, from what I understand, the LCF was developed by actual stats people with actual statistical backgrounds who did actual stat-like things such as backtesting for predictive power.

With that said, I'm always, always leery of anything like that designed to predict inefficiencies in the NFL market. Not because I believe that inefficiencies don't exist- I'm sure there are dozens of them- but because I think the NFL is such a massive high-stakes business that as soon as an inefficiency is discovered, the market is going to correct for it pretty darn quickly. If Lewin really stumbled onto something when he discovered that NFL scouts were underrating completion percentage and number of starts, then I'm sure NFL scouting departments would have discussed it and corrected for it already, rendering the LCF useless going forward.
 
Is there any usefulness in using this rule to help prevent making a bad mistake? What I am wondering is: does anyone have a list of which quarterbacks in, say, the last 5-10 years have met these criteria? I am sure there are many who did not meet these requirements that have been successful. If we have a complete list of all who have met these requirements, I wonder what percentage of them have been solid NFL QBs?

 
Jason Wood said:
JaxBill said:
Trent Edwards better watch out

Ryan Fitzpatrick: 48 Wonderlic, 25 starts, 59.9 comp pct
Brian Brohm: 32 Wonderlic, 33 starts, 65 comp pct
I know you're being facetious, but I actually think Brohm could start in Buffalo and ultimately be a much better option than Trent or Fitzy. :confused:
You might be a better option than those two guys. With that said, Brohm would still be sitting at home after getting cut by GB if he wasn't a 2nd round pick a couple years ago. Him being the 3rd-stringer in Buffalo is just going through the proper steps of what happens to a QB who was drafted way too high.
 
guderian said:
I still prefer the LCF developed by Football Outsiders since their model adds the requirement that a QB be drafted in the first or second round. The SI study apparently seeks to replace the task of scouting whereas the LCF doesn't apply to QBs drafted in the 3rd round or later. IMO the LCF is more of a secondary filter beyond draft status that helps you determine if scouts had enough data to evaluate a QB or did a single team make a mistake in drafting a player off of a limited data set or measurables or whatnot. The LCF also doesn't have specific benchmarks, but is of a "more is better" nature.
Also, from what I understand, the LCF was developed by actual stats people with actual statistical backgrounds who did actual stat-like things such as backtesting for predictive power.
It absolutely wasn't. That's been my issue with it from the beginning. Lewin was in college and admitted he didn't have much of a statistical background.
 
There is no difference between this study (or FO's study) that says 26-27-60=good and a study that says 40% of the time you roll a 3, a 6 follows. Want proof? Look at this set of data!

You can't formulate and test your hypothesis off of the same set of data. That's why it's bunk. And that's not what FBG does. If the 26-27-60 people had used one set of data to formulate their theory, and then tested that theory on another set of data, and found the results held, *then* it would be legitimate. But that's not what they did.
The Freshman stats article is nice, but when do you formulate your hypothesis using one set of data and test it on another? Article 1

You used one set of data to conclude "Based off some historical comparisons, it appears that a healthy Brees has a fantasy floor that's extremely high (top 10 in fantasy points per game) and a good chance to boast elite fantasy numbers (three players finished as QB1 in their 5th season)." I'm not criticizing your work, because your alternative would be to do something silly like come up with an estimated ranking based on the QBs that were in the AFC and to test it using QBs that played in the NFC. Unfortunately, if you did that you'd have a sample size in the single digits.

Article 2

You used one set of data to conclude "it stands to reason that Chicago should rank in the top five in pass attempts, and based on Cutler's skill level, in the top five in yards."

Those were just the first two examples that I looked at. In both cases you took a single set of data, crunched it and came to a conclusion. SI came up with 26/27/60 and you came up with Brees having a floor of top 10 and a good chance at "elite" that you didn't define and that Chicago should rank top 5 in pass attempts and Cutler would be top 5 in yards.

The stats theory is nice, but their 26/27/60 is similar to your top 10/elite and top 5/5.
Neither of those are examples of hypothesis testing, because there's no testing going on. That's why they're not similar. An example of hypothesis testing after data mining would be if I came out with a "study" that said after the Steelers win the Super Bowl, there's a 33% chance the Steelers will win the next SB, a 33% chance the Raiders will win the next SB, a 16% chance the Colts will win or a 16% chance the Saints will win it, and a 0% chance that anyone else will win. How do I know this? Because I looked at all the times this has ever happened, and this is what the data tells me.

There'd be zero predictive power to that study. And while there may be predictive power to the LCF (I doubt it, tho), if there is predictive power, it certainly hasn't been proven by anyone.

 
guderian said:
I still prefer the LCF developed by Football Outsiders since their model adds the requirement that a QB be drafted in the first or second round. The SI study apparently seeks to replace the task of scouting whereas the LCF doesn't apply to QBs drafted in the 3rd round or later. IMO the LCF is more of a secondary filter beyond draft status that helps you determine if scouts had enough data to evaluate a QB or did a single team make a mistake in drafting a player off of a limited data set or measurables or whatnot. The LCF also doesn't have specific benchmarks, but is of a "more is better" nature.
Also, from what I understand, the LCF was developed by actual stats people with actual statistical backgrounds who did actual stat-like things such as backtesting for predictive power.
It absolutely wasn't. That's been my issue with it from the beginning. Lewin was in college and admitted he didn't have much of a statistical background.
That was my impression of the LCF, too, until I read comment #79 by Bill Barnwell:
I just want to point out here that the LCF isn't an example of data dredging.

It's a pretty standard application of regression. The research question, in the case of the LCF, is "Are there any variables that predict professional success for quarterbacks?", with the null hypothesis being "There are no variables that predict professional success for quarterbacks outside of Jon Gruden's good face."

Next, a variety of data was gathered on a large number of college quarterbacks. This is where I think you're getting confused -- gathering a bunch of relevant data and using it to measure causation isn't data dredging. There's enough of a sample size that using part of the data (say, the first half of the timeframe for which QB data was available) to retrodict the performance of the players in the second half of the timeframe yields an effective model. (Since I didn't do the research in question, I don't have the exact numbers in front of me.)

Then, after eliminating those variables that don't have any predictive power, you're left with the final regression, containing statistically significant variables that have been tested against parts of the dataset not included in the regression. The LCF formula was enough to reject the null hypothesis.
Now, maybe Bill is lying, or maybe Bill is mistaken, but it certainly seems to me like he is very authoritatively and very definitively stating that LCF was vetted through standard statistical procedures to verify significance.
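For what it's worth, the procedure Barnwell describes -- fit on one part of the timeframe, retrodict the other part -- is ordinary out-of-sample validation. A minimal sketch with synthetic, hypothetical data (a one-variable least-squares fit standing in for FO's actual multivariate regression, which we don't have):

```python
import random
import statistics

random.seed(2)

def make_qb():
    """Hypothetical QB: completion pct genuinely predicts pro value here."""
    cmp_pct = random.uniform(52, 68)
    pro_value = 0.5 * cmp_pct + random.gauss(0, 2)  # real signal plus noise
    return cmp_pct, pro_value

data = [make_qb() for _ in range(60)]
fit_era, holdout_era = data[:30], data[30:]  # split by "era", as described

def simple_ols(pairs):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    xs, ys = zip(*pairs)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in pairs)
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = simple_ols(fit_era)

# Retrodict the holdout era; a real signal should beat a mean-only baseline.
sse_model = sum((y - (slope * x + intercept)) ** 2 for x, y in holdout_era)
mean_only = statistics.mean(y for _, y in fit_era)
sse_baseline = sum((y - mean_only) ** 2 for _, y in holdout_era)
print(sse_model < sse_baseline)
```

Unlike the dice example, the predictor here was built with real signal in it, so the mined relationship survives the holdout test. That surviving-out-of-sample step is the difference between the LCF as Barnwell describes it and the 26-27-60 article.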
 
Surely I can tell you tomorrow which stock you should have bought yesterday, but it doesn't help you at all to decide the next one you should buy, and it doesn't matter what the debt ratio, P/E ratio, or market cap of the company is.

I think all of these stats are useless. The complexity of the human movement system and psychological/cognitive makeup, as well as offensive/defensive systems, responsibilities, previous game situations, injuries, depth charts, etc., is such that I think players become proprietaries and not commodities. Therefore, using OTHER players to determine how a specific player will perform seems off-base. Certainly there are minimum thresholds of height, weight, speed, and agility, but otherwise I will continue to evaluate players mostly on how they perform and their situation/opportunity.

I don't care about 26 and 27. 60 would concern me. If you can't hit WRs that are open by 5 yards 2/3 of the time, then how are you going to hit them when they're open by 2 feet?

 
I don't care about 26 and 27. 60 would concern me. If you can't hit WRs that are open by 5 yards 2/3 of the time, then how are you going to hit them when they're open by 2 feet?
Even that has to be taken in context. Jay Cutler couldn't hit his WRs 60% of the time... but I doubt he ever saw a receiver at Vanderbilt who was open by 5 yards. He's done just fine hitting his receivers at the NFL level, though.
 
With that said, I'm always, always leery of anything like that designed to predict inefficiencies in the NFL market. Not because I believe that inefficiencies don't exist- I'm sure there are dozens of them- but because I think the NFL is such a massive high-stakes business that as soon as an inefficiency is discovered, the market is going to correct for it pretty darn quickly. If Lewin really stumbled onto something when he discovered that NFL scouts were underrating completion percentage and number of starts, then I'm sure NFL scouting departments would have discussed it and corrected for it already, rendering the LCF useless going forward.
This seems to be demonstrably false when it comes to coaching decisions (e.g., when to go for it on fourth down, which coaches keep getting wrong even years after persuasive papers had been published pointing out common errors). I wouldn't necessarily assume it to be true for scouting decisions, either.
 
Last edited by a moderator:
With that said, I'm always, always leery of anything like that designed to predict inefficiencies in the NFL market. Not because I believe that inefficiencies don't exist- I'm sure there are dozens of them- but because I think the NFL is such a massive high-stakes business that as soon as an inefficiency is discovered, the market is going to correct for it pretty darn quickly. If Lewin really stumbled onto something when he discovered that NFL scouts were underrating completion percentage and number of starts, then I'm sure NFL scouting departments would have discussed it and corrected for it already, rendering the LCF useless going forward.
This seems to be demonstrably false when it comes to coaching decisions (e.g., when to go for it on fourth down, which coaches keep getting wrong even years after persuasive papers had been published pointing out common errors). I wouldn't necessarily assume it to be true for scouting decisions, either.
There's a plausible explanation for why coaches would go with a sub-optimal strategy, though. Their continued employment could very well hinge on it. Just look at all the flak Belichick took when he made what was statistically the right play by going for it on 4th down against Indy. If that was Lovie Smith, he'd have been fired before the week was out. I can't think of a single plausible explanation as to why a scout would go with a sub-optimal strategy, though. It's not like a scout's prospects for continued employment are enhanced if he promotes more QBs who ultimately wind up busting. Dave Razzano notwithstanding.

Those same statisticians and economists that keep finding that coaches should go for it on 4th down also keep finding that the NFL draft is a pretty efficient marketplace. And besides, I don't think those guys know nearly as much as they think they do (see: Massey-Thaler and their assertion that the team with the #1 draft pick should trade it for the #40 draft pick straight up).

It's like Billy Beane in Oakland. He identified several glaring inefficiencies in baseball (on-base percentage was underrated, guys who took balls were underrated, etc.) and exploited them en route to posting the best record in MLB over a 5-year span. Then the rest of baseball caught on, and suddenly the A's are just another mediocre small-market franchise that hasn't posted a winning season in 4 years. And, by all accounts, baseball is far more resistant to change and innovation than football is. Identifying an inefficiency is great, but when the stakes are that high, the market tends to correct it pretty quickly once it catches on.

 
"Data-snooping bias can occur when researchers either do not form a hypothesis in advance or narrow the data used to reduce the probability of the sample refuting a specific hypothesis."
If you say so. I'm not here to criticize what FBGs does. Historical statistical analysis can only go so far in a sport with only 16 games providing a limited data set. It's a flaw that we have to live with, or go play fantasy baseball. If you limit your data back to 1998 like they did, you open yourself to accusations that you didn't use enough data. If you extend your data back to the '70s and '80s, you come across as too naive to recognize the changes in the sport since then and appear to be a hostage to the data. What SI did was crank through data and come up with benchmarks; what you do is typically to say "how did players do historically that were in a similar, definable situation?" One could make the argument that both analyses are only applicable in retrospect. Like I said, I'm not here to criticize FBGs, but if you're going to criticize what SI did, you need to do better than to say that historical analyses and formulas only work in retrospect.
I'm going to set aside the data-snooping discussion, because the article is worthy of much criticism even without that.

First, as has been mentioned, it's essentially plagiarized. But just from some internet site, so whatever, I guess. They're SI; they can do that.

Second, they don't show us the full data, or even summarize it in any way.

Since 1998, these are some of the NFL quarterbacks who aced all three parts of the Rule of 26-27-60...
and here, by the way, are some of the others: Brady Quinn, Rex Grossman, Matt Leinart, Charlie Frye, and Kellen Clemens.
Meanwhile, among the once highly-touted prospects who failed at least one part of the formula...
Also among the same group are Tom Brady, Carson Palmer, and Aaron Rodgers.

Here is a list of all quarterbacks who debuted in 1998 or later and have made a pro bowl, along with whether or not they meet all three criteria:

Derek Anderson: no

Tom Brady: no

Drew Brees: no

Marc Bulger: yes

Daunte Culpepper: no

Jay Cutler: no

Jake Delhomme: ??

Jeff Garcia: ??

David Garrard: no

Brian Griese: no

Matt Hasselbeck: no

Eli Manning: yes

Peyton Manning: yes

Donovan McNabb: no

Carson Palmer: no

Philip Rivers: yes

Aaron Rodgers: no

Ben Roethlisberger: no

Tony Romo: yes

Matt Schaub: yes

Michael Vick: no

Kurt Warner: ??

Vince Young: no

So that's 6 yes, 14 no, and 3 maybe.

Now how does that compare to some sort of control group, maybe all first- and second-round picks who have not made a pro bowl, or something like that? We don't know, because the article provides no context, just a couple of selectively-chosen lists.

It's like saying, "defensive players from the ACC are great. Just look at Derrick Brooks, Brian Dawkins, James Farrior, Julius Peppers, Trevor Pryce, Kris Jenkins, Keith Brooking, Greg Ellis, and Patrick Kerney. Meanwhile, these notable first-round busts did NOT go to ACC schools: Vernon Gholston, Johnathan Sullivan, DeWayne Robertson, Wendell Bryant, Philip Buchanon, Jimmy Kennedy, and Michael Haynes."
Yet again, I'm not defending the article. I'm bristling at Chase's accusation that "the formula is only useful in retrospect" when their application of the data is very similar to a lot of what FBGs does--put together a list of data that meets some criteria, crank through the data, and come up with a conclusion. When you do that, your conclusions are just as "only useful in retrospect." I agree that they used selection bias.
 
Last edited by a moderator:
There is no difference between this study (or FO's study) that says 26-27-60=good and a study that says 40% of the time you roll a 3, a 6 follows. Want proof? Look at this set of data!

You can't formulate and test your hypothesis off of the same set of data. That's why it's bunk. And that's not what FBG does. If the 26-27-60 people had used one set of data to formulate their theory, and then tested that theory on another set of data, and found the results held, *then* it would be legitimate. But that's not what they did.
The Freshman stats article is nice, but when do you formulate your hypothesis using one set of data and test it on another? Article 1

You used one set of data to conclude "Based off some historical comparisons, it appears that a healthy Brees has a fantasy floor that's extremely high (top 10 in fantasy points per game) and a good chance to boast elite fantasy numbers (three players finished as QB1 in their 5th season)." I'm not criticizing your work, because your alternative would be to do something silly like come up with an estimated ranking based on the QBs that were in the AFC and to test it using QBs that played in the NFC. Unfortunately, if you did that you'd have a sample size in the single digits.

Article 2

You used one set of data to conclude "it stands to reason that Chicago should rank in the top five in pass attempts, and based on Cutler's skill level, in the top five in yards."

Those were just the first two examples that I looked at. In both cases you took a single set of data, crunched it and came to a conclusion. SI came up with 26/27/60 and you came up with Brees having a floor of top 10 and a good chance at "elite" that you didn't define and that Chicago should rank top 5 in pass attempts and Cutler would be top 5 in yards.

The stats theory is nice, but their 26/27/60 is similar to your top 10/elite and top 5/5.
Neither of those are examples of hypothesis testing, because there's no testing going on. That's why they're not similar. An example of hypothesis testing after data mining would be if I came out with a "study" that said after the Steelers win the Super Bowl, there's a 33% chance the Steelers will win the next SB, a 33% chance the Raiders will win the next SB, a 16% chance the Colts will win or a 16% chance the Saints will win it, and a 0% chance that anyone else will win. How do I know this? Because I looked at all the times this has ever happened, and this is what the data tells me.

There'd be zero predictive power to that study. And while there may be predictive power to the LCF (I doubt it, tho), if there is predictive power, it certainly hasn't been proven by anyone.
We're going to have to agree to disagree; however, I do agree that the SI article didn't employ proper statistical analysis.
 
Last edited by a moderator:
There's a plausible explanation for why coaches would go with a sub-optimal strategy, though. Their continued employment could very well hinge on it. Just look at all the flak Belichick took when he made what was statistically the right play by going for it on 4th down against Indy. If that was Lovie Smith, he'd have been fired before the week was out. I can't think of a single plausible explanation as to why a scout would go with a sub-optimal strategy, though.
Why wouldn't the same effect occur for scouts? Their recommendations to the front office are very visible, and if they go around touting unorthodox players over 6'4" QBs with a laser arm, their mistakes are going to be much more noticeable than the average scout's. Their decisions are still subject to variance in much the same way that going for it on 4th-and-2 is, except that it will probably take several years, not several games or several plays, before their more correct decisions pay off.
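The Belichick 4th-and-2 point above boils down to a simple expected-value comparison. The numbers in this sketch are hypothetical round figures, not the actual 2009 Patriots-Colts win probabilities; they only show why the "go for it" call can be statistically right even when a failure looks terrible:

```python
# Expected-win-probability comparison for a 4th-and-2 decision.
# All numbers are hypothetical round figures, NOT the actual 2009
# Patriots-Colts values; they only illustrate the shape of the argument.

def go_for_it_wp(p_convert, wp_if_convert, wp_if_fail):
    """Win probability of attempting the conversion."""
    return p_convert * wp_if_convert + (1 - p_convert) * wp_if_fail

# Assumptions: ~60% conversion rate on 4th-and-2, a near-certain win if
# converted, and a real chance of winning even after a failed attempt.
wp_go = go_for_it_wp(p_convert=0.60, wp_if_convert=1.00, wp_if_fail=0.50)
wp_punt = 0.70  # assumed chance of holding the opponent after a punt

print(f"go for it: {wp_go:.2f}, punt: {wp_punt:.2f}")
# Under these assumptions the attempt (0.80) beats the punt (0.70), yet
# only the coach who fails on the attempt gets second-guessed.
```

Plug in different assumed probabilities and the conclusion can flip, which is exactly why the decision is about the inputs, not the optics.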
 
There is no difference between this study (or FO's study) that says 26-27-60 = good and a study that says that 40% of the time after you roll a 3, a 6 follows. Want proof? Look at this set of data!

You can't formulate and test your hypothesis on the same set of data. That's why it's bunk, and that's not what FBG does. If the 26-27-60 people had used one set of data to formulate their theory, then tested that theory on another set of data and found the results held, *then* it would be legitimate. But that's not what they did.
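The formulate-on-one-set, test-on-another procedure described here can be sketched in a few lines. Every quarterback record below is an invented placeholder, not a real prospect; the point is only the mechanics of a holdout check:

```python
# Sketch of the two-sample procedure described above: derive the rule's
# thresholds from one group of quarterbacks, then check how well the rule
# sorts a *different* group. All records here are invented placeholders.

FORMULATION = [  # (wonderlic, college starts, completion %, succeeded)
    (28, 45, 62.0, True), (31, 30, 61.5, True),
    (24, 20, 58.0, False), (22, 29, 56.5, False),
]
HOLDOUT = [
    (27, 33, 60.5, True), (30, 40, 63.0, True),
    (25, 18, 59.0, False), (26, 35, 64.0, False),  # a false positive
]

def fits_rule(qb, w=26, s=27, c=60.0):
    """Does a QB clear all three 26-27-60 thresholds?"""
    wonderlic, starts, comp, _ = qb
    return wonderlic >= w and starts >= s and comp >= c

# The honest test: accuracy on data the rule never saw.
hits = sum(fits_rule(qb) == qb[3] for qb in HOLDOUT)
print(f"holdout accuracy: {hits}/{len(HOLDOUT)}")
```

If the thresholds were tuned on `FORMULATION` and only ever scored against `FORMULATION`, the reported accuracy would be meaningless; the holdout group is what makes the number informative.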
The Freshman stats article is nice, but when do you formulate your hypothesis using one set of data and test it on another?

Article 1

You used one set of data to conclude "Based off some historical comparisons, it appears that a healthy Brees has a fantasy floor that's extremely high (top 10 in fantasy points per game) and a good chance to boast elite fantasy numbers (three players finished as QB1 in their 5th season)." I'm not criticizing your work, because your alternative would be to do something silly like come up with an estimated ranking based on the QBs that were in the AFC and to test it using QBs that played in the NFC. Unfortunately, if you did that you'd have a sample size in the single digits.

Article 2

You used one set of data to conclude "it stands to reason that Chicago should rank in the top five in pass attempts, and based on Cutler's skill level, in the top five in yards."

Those were just the first two examples that I looked at. In both cases you took a single set of data, crunched it and came to a conclusion. SI came up with 26/27/60 and you came up with Brees having a floor of top 10 and a good chance at "elite" that you didn't define and that Chicago should rank top 5 in pass attempts and Cutler would be top 5 in yards.

The stats theory is nice, but their 26/27/60 is similar to your top 10/elite and top 5/5.
Neither of those is an example of hypothesis testing, because there's no testing going on. That's why they're not similar. An example of forming a hypothesis purely by data mining would be if I came out with a "study" that said after the Steelers win the Super Bowl, there's a 33% chance the Steelers will win the next SB, a 33% chance the Raiders will win the next SB, a 16% chance the Colts will win it, a 16% chance the Saints will win it, and a 0% chance that anyone else will win. How do I know this? Because I looked at all the times this has ever happened, and that's what the data tells me.

There'd be zero predictive power to that study. And while there may be predictive power to the LCF (I doubt it, though), if there is predictive power, it certainly hasn't been proven by anyone.
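A quick simulation makes the data-mining objection concrete: mine the most striking "a 6 follows a 3"-style pattern from one batch of fair-die rolls, then check it against a fresh batch. The 60-roll sample size and the seed are arbitrary choices for the sketch:

```python
import random

# Demonstrates the "look at this set of data!" trap from the dice example
# above: any finite sample of fair-die rolls contains some pair (a, b)
# where b follows a unusually often, but that mined "rule" carries no
# real predictive power on a fresh sample.

def follow_rate(rolls, a, b):
    """Fraction of times value b immediately follows value a."""
    after_a = [y for x, y in zip(rolls, rolls[1:]) if x == a]
    return sum(v == b for v in after_a) / len(after_a) if after_a else 0.0

rng = random.Random(0)
sample1 = [rng.randint(1, 6) for _ in range(60)]  # the mined sample
sample2 = [rng.randint(1, 6) for _ in range(60)]  # the fresh sample

# "Formulate": find the most striking follow-up pattern in sample 1.
a, b = max(((a, b) for a in range(1, 7) for b in range(1, 7)),
           key=lambda p: follow_rate(sample1, *p))

print(f"mined rule: a {b} follows a {a} "
      f"{follow_rate(sample1, a, b):.0%} of the time in sample 1, "
      f"{follow_rate(sample2, a, b):.0%} in sample 2")
# On fresh rolls the mined rate typically falls back toward the fair-die
# baseline of 1/6, which is the whole objection in miniature.
```

The mined rate in sample 1 is guaranteed to look at least as good as chance, because it was selected for being the best-looking pattern; sample 2 is the only number that says anything about prediction.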
We're going to have to agree to disagree; however, I do agree that the SI article didn't employ proper statistical analysis.
I don't really understand the disconnect you are having here. The first article on Drew Brees linked above talks about just Drew Brees. It doesn't attempt to make any kind of general statement/proposition to apply to all QBs, or to all QBs who meet a specific criteria. That's the difference. Why is this hard to understand?
 
You're completely missing the point, and I'm not the one keeping this subject alive; I said we'll have to agree to disagree. SI put together a single sample of data that they arbitrarily determined, crunched it and offered a projection. Chase put together a single sample of data using criteria that he arbitrarily determined, crunched it and offered a projection. He's using more mechanics, but he's doing the same thing that SI did.

He's arguing that SI should use one set of data, come up with a hypothesis and test that hypothesis on a second set of data. OK, fine, I'd like to see that too, but why is he criticizing SI's study as only "useful in retrospect" when he's also coming to a conclusion using only a single set of data?

The point here is that if you use a single set of historical data to come to a conclusion, that conclusion is only applicable to that set of data, whether or not you use more rigorous mathematical mechanics, and whether you're coming up with a rule to grade QBs or a floor/likely performance for a QB. Again, I'm not criticizing FBGs; it's the nature of football and the limited sample sizes we have to work with. FBGs handles it their way; Football Outsiders attempts to overcome it with DVOA/DYAR metrics that look at individual plays, etc. We all live with it.
 