Fantasy Football - Footballguys Forums


332-95

Interesting. A bit science denying (polls don't matter). A bit deflecting (nativist vs racist, and I don't support Trump but I defend him anyway). I could name a few posters that sound like him (does he claim to have voted for Gary Johnson? ;)  )

2020 will show if there are enough of the same ilk to keep Trump in power.
Not sure where you get not totally trusting polls as "science denying". If that's the new definition of science denier, that'll be interesting for sure. And no idea if he voted for Gary Johnson. And I don't think he has any interest in keeping Trump as President. 

 
Skoo said:
Because it's their job?

I just completely disagree with this idea that you should only impeach if you think it's politically advantageous. You do it because it's your sworn duty.
The only thing these people care about is getting re-elected 

 
Are you denying the science behind polls?

(I recommend everyone get thoroughly acquainted with the concept of "margin of error" when discussing polls)
Not at all. I'm saying I don't think someone who doesn't fully trust the polls is a "science denier". And yes, I've seen the "Educate yourself on margin of error - HTH ;)  " line. 

 
Not at all. I'm saying I don't think someone who doesn't fully trust the polls is a "science denier". And yes, I've seen the "Educate yourself on margin of error - HTH ;)  " line. 
So, not getting your point.

You agree that polls are scientifically based. You agree that polls have a margin of error. Yet, in your 

Apparently if you don't, you're a "science denier". :unsure:  
you seem to state that it is ok to disbelieve polls on a general principle (which you have not described; care to?).

So, yeah, I'm stuck here

 
[scooter] said:
Joe, the whole point is that Trump didn't only target Congresswomen who were born in other countries. He targeted Congresswomen who were born in America, including one (Pressley) whose ancestry is more American than Trump's.

If you want to compare just Ilhan Omar to a hypothetical white Congressperson born in Canada, that's fine. (But former Michigan governor Jennifer Granholm checks most of the boxes you brought up -- white, Democratic, born in Canada, now a US citizen, smart, feisty, extremely critical of Trump -- and not once has Trump told her to go back to Canada. Debbie Mucarsel-Powell has light skin, is a Democrat Congressperson, was born in another country, is now a US citizen, is smart and feisty and extremely critical of Trump, and Trump hasn't told her to go back where she came from, either.)

Trump has had many opportunities to tell white critics to go back to another country. And yet he's never, ever, ever done it -- not when the critic wasn't born in the United States (Ted Cruz, John McCain, Jennifer Granholm, Debbie Mucarsel-Powell, etc.), and not when the critic was born in the United States.
If there was a 35-year-old white democratic congresswoman born in Canada but now a US citizen, and she was smart and feisty and popular and extremely critical of Trump, could you envision him telling her to go back to Canada and saying they should fix their "failing" healthcare?
No, Trump would not tell a white citizen to go back to Canada. He might point out their Canadian birth and make fun of them but he would never think to tell them to go back.

 
Ok, the semantics game again. I see. 

So just admit the polling process is flawed
It's not a semantics game at all. The actual polling was accurate. What was flawed was basing a probability on national polling that wasn't reflective of the MOE in the state-by-state polls.

None of which really affects (at this point) polling that tracks the opinions of groups like independents, favorability ratings, and how things change week by week due to events.

 
[scooter] said:
Besides the President, you mean?
He's arguing they were not born in America?
Yes, Joe. It's right there in the first sentence of Trump's original tweet: "Democrat Congresswomen, who originally came from countries whose governments are a complete and total catastrophe".

He is saying that Ayanna Pressley and Alexandria Ocasio-Cortez came from another country, i.e. that they were not born in America.

 
Yes, Joe. It's right there in the first sentence of Trump's original tweet: "Democrat Congresswomen, who originally came from countries whose governments are a complete and total catastrophe".

He is saying that Ayanna Pressley and Alexandria Ocasio-Cortez came from another country, i.e. that they were not born in America.
Don't be so negative. Maybe he's coming to the realization of what he's done to our government.

 
No. Your question shows you need to bone up on how polls work (pay close attention to the concept of margin of error). HTH
I teach Statistics so I know a bit about the concept of margin of error in polling. I’d be edified if you would share with the board your understanding of that concept. Maybe a separate thread, even, unless you can give a reasonably concise answer. I find that a decent portion of our population has misconceptions about the more general notion of probabilities so I’m even more skeptical about folks understanding margin of error. It confounds my students often. But as a FBG, I suspect you actually do know what you are talking about. I’m on my phone or I’d do it myself. TIA.

 
I teach Statistics so I know a bit about the concept of margin of error in polling. I’d be edified if you would share with the board your understanding of that concept. Maybe a separate thread, even, unless you can give a reasonably concise answer. I find that a decent portion of our population has misconceptions about the more general notion of probabilities so I’m even more skeptical about folks understanding margin of error. It confounds my students often. But as a FBG, I suspect you actually do know what you are talking about. I’m on my phone or I’d do it myself. TIA.
:popcorn:

 
boots11234 said:
Democrats are going to lose MN this time too. Trump is a genius making the squad the face of your party. 
I am betting against Trump winning Minnesota. If he needs Minnesota, he is in a world of hurt. Republicans running for Senate got their clocks cleaned by Klobuchar and Smith in 2018. I actually thought Smith might not win, but she won easily.

 
Trump's approval rating in Minnesota is down 7 points since he took office

Trump's "genius" is part of the reason why Ilhan Omar got elected to Congress in the first place.
Omar won her election by 56 percentage points in a safe-seat district. She replaced Keith Ellison, who moved on to become Minnesota’s Attorney General. Ellison is also black, Muslim, and radical. There’s a very good chance that Omar is elected with similar totals even if Trump never existed.

 
Omar won her election by 56 percentage points in a safe-seat district. She replaced Keith Ellison, who moved on to become Minnesota’s Attorney General. Ellison is also black, Muslim, and radical. There’s a very good chance that Omar is elected with similar totals even if Trump never existed.
I can agree with this. 

But there’s no chance Trump wins Minnesota. 

Every 4 years both sides have their pipe dreams. This time around Democrats think they can win Texas or Georgia. Republicans have their eyes on Minnesota, Nevada. This just ain’t happening. 

 
I can agree with this. 

But there’s no chance Trump wins Minnesota. 

Every 4 years both sides have their pipe dreams. This time around Democrats think they can win Texas or Georgia. Republicans have their eyes on Minnesota, Nevada. This just ain’t happening. 
I’m not saying any of those states will flip this election, but if I had to pick the one most likely it would be Georgia.

 
Joe Bryant said:
Not arguing that.

If there was a white democratic congresswoman born in Canada but now a US citizen, and she was smart and feisty and popular and extremely critical of Trump, could you envision him telling her to go back to Canada and saying they should fix their "failing" healthcare?
I think it's likely that some right wing pol or commentator will use this line on some liberal male congressman in order to provide cover but if that happens it's fair to mark that person as fundamentally dishonest. 

"A conservative is someone who stands athwart history, yelling Stop, at a time when no one is inclined to do so, or to have much patience with those who so urge it."   William F Buckley

White men have dominated the public sphere of the USA since the colonial era. There had to be a war to eliminate slavery and allow black men to be citizens. There was an incredible amount of hostility towards people who favored women getting the vote, and then in the 1970s towards "women's lib".

The support for white supremacy and patriarchy isn't what it was in the '50s and '60s, but to pretend it's gone away is dishonest. "The Squad" is a symbol of how that standard is fading away, and IMO that's why Trump, Fox News, talk radio, and many conservatives attack these women. It's changing slowly, and a society where the color of a person's skin is of no more significance than the color of their eyes won't happen any time soon.

 
I teach Statistics so I know a bit about the concept of margin of error in polling. I’d be edified if you would share with the board your understanding of that concept. Maybe a separate thread, even, unless you can give a reasonably concise answer. I find that a decent portion of our population has misconceptions about the more general notion of probabilities so I’m even more skeptical about folks understanding margin of error. It confounds my students often. But as a FBG, I suspect you actually do know what you are talking about. I’m on my phone or I’d do it myself. TIA.
No worries. The specific discipline behind polling is Psephology - Wiki has this to say of the discipline 

Psephology is a division of political science that deals with the examination as well as the statistical analysis of elections and polls. People who practice psephology are called psephologists.

A few of the major tools that are used by a psephologist are historical precinct voting data, campaign finance information, and other related data. Public opinion polls also play an important role in psephology. Psephology also has various applications specifically in analysing the results of election returns for current indicators, as opposed to predictive purposes. For instance, the Gallagher Index measures the disproportionality of an election.
As for margin of error, I found this link (breaking it down into component error factors) to be interesting and concise

 
No worries. The specific discipline behind polling is Psephology - Wiki has this to say of the discipline 

As for margin of error, I found this link (breaking it down into component error factors) to be interesting and concise
Let's cut to the chase. I suspect that a significant proportion of voters misunderstand margin of error, believing that it gives a range of percents that the "true" number must fall in. For instance, if a well-designed randomized survey claimed that Candidate #1 is polling at 46% with a margin of error of +/- 3%, then that would mean that the true percentage of votes Candidate #1 receives is between 43% and 49%. And if Candidate #2 came in at 54% with the same margin of error, producing an interval of 51% to 57%, then that poll would be saying that Candidate #2 "should" win or "will" win or what-have-you. But none of that is right. These polls with their margins of error are just talking about probabilities, and the reported margin of error is mostly quantifying the variation inherent in conducting a random sample. Other types of error (like non-response bias or people not being honest) are quite difficult to quantify, so I suspect most pollsters ignore them and simply follow the rule of thumb that a random sample of about 1000 produces a margin of error of about 3%.

It's all too nuanced to summarize in one post (I have not told you what the +/- 3% margin of error means, only what it doesn't mean), but my suspicion is that many, many people misunderstand polls and margin of error. From your response, I cannot tell if you do or not since you just quoted another site. I'll keep going if there is interest, but I've probably passed the tl;dr threshold.
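The rule of thumb at the end of that post is easy to check directly. A minimal sketch (assuming the standard normal approximation for a sample proportion; 1.96 is the critical value for 95% confidence, and p = 0.5 is the worst case):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a sample proportion.
    z = 1.96 corresponds to the usual 95% confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) with the typical n ~ 1000 poll:
moe = margin_of_error(0.5, 1000)
print(f"+/- {100 * moe:.1f} points")  # about +/- 3.1 points
```

Note that this only quantifies random sampling variation; non-response bias and dishonest answers are not in the formula at all.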

 
Any science focused on predictions is always going to be questioned/challenged as a flawed process.

Meteorology seems very similar to polling.  Anytime a weatherman predicts a 90% chance of rain and it is sunny out all day, people lose their minds at how bad they are at their jobs.  Yet, they were simply saying that based on statistical models, there would be rain 9 out of 10 times.

 
Any science focused on predictions is always going to be questioned/challenged as a flawed process.

Meteorology seems very similar to polling.  Anytime a weatherman predicts a 90% chance of rain and it is sunny out all day, people lose their minds at how bad they are at their jobs.  Yet, they were simply saying that based on statistical models, there would be rain 9 out of 10 times.
Meteorology has absolutely no similarity to statistical polling.

 
Let's cut to the chase. I suspect that a significant proportion of voters misunderstand margin of error, believing that it gives a range of percents that the "true" number must fall in. For instance, if a well-designed randomized survey claimed that Candidate #1 is polling at 46% with a margin of error of +/- 3%, then that would mean that the true percentage of votes Candidate #1 receives is between 43% and 49%. And if Candidate #2 came in at 54% with the same margin of error, producing an interval of 51% to 57%, then that poll would be saying that Candidate #2 "should" win or "will" win or what-have-you. But none of that is right. These polls with their margins of error are just talking about probabilities, and the reported margin of error is mostly quantifying the variation inherent in conducting a random sample. Other types of error (like non-response bias or people not being honest) are quite difficult to quantify, so I suspect most pollsters ignore them and simply follow the rule of thumb that a random sample of about 1000 produces a margin of error of about 3%.

It's all too nuanced to summarize in one post (I have not told you what the +/- 3% margin of error means, only what it doesn't mean), but my suspicion is that many, many people misunderstand polls and margin of error. From your response, I cannot tell if you do or not since you just quoted another site. I'll keep going if there is interest, but I've probably passed the tl;dr threshold.


I'm interested. Please go on.

 
I'm interested. Please go on.
Yeah @pecorino, I would like to hear more too. My perception when I see poll results was that the result would fall inside that range (at least 95% of the time). So if 54% was predicted, then 95 out of 100 times it would fall between 51 and 57, with the greatest concentration being at 54%.

 
Any science focused on predictions is always going to be questioned/challenged as a flawed process.

Meteorology seems very similar to polling.  Anytime a weatherman predicts a 90% chance of rain and it is sunny out all day, people lose their minds at how bad they are at their jobs.  Yet, they were simply saying that based on statistical models, there would be rain 9 out of 10 times.
Not exactly. They are saying that within the area of discussion, 90% will receive rain.

 
Yeah @pecorino, I would like to hear more too. My perception when I see poll results was that the result would fall inside that range (at least 95% of the time). So if 54% was predicted, then 95 out of 100 times it would fall between 51 and 57, with the greatest concentration being at 54%.
This is very close to correct but there are some semantic issues with what you wrote. But if the general public had this idea of polls and margin of error, I'd be satisfied.

 
What I see on this board, more than anywhere else, is an absurd reliance on polls. To the point where some posters, one in particular, have lost all ability to think for themselves and can only point to polling data to do their thinking for them. It's an obsession and not at all healthy.

 
What I see on this board, more than anywhere else, is an absurd reliance on polls. To the point where some posters, one in particular, have lost all ability to think for themselves and can only point to polling data to do their thinking for them. It's an obsession and not at all healthy.
I don't think any one person here relies on polls to the extent of not thinking for themselves.

I think the obsession of not listening at all to any polling is also not all that healthy...or just dismissing them all as meaningless.

There is a place for polling and using the data...it can be quite useful.

 
I don't think any one person here relies on polls to the extent of not thinking for themselves.

I think the obsession of not listening at all to any polling is also not all that healthy...or just dismissing them all as meaningless.

There is a place for polling and using the data...it can be quite useful.
Your comment is noted.

 
About polls and surveys:

1) Online polls and/or any survey in which people may choose to respond by their own choice are bunk and should be either tossed out altogether or consumed merely for their entertainment value. Think: a Presidential poll on November 7th, 2016 in which online visitors to CNN's website could click who they're voting for -- versus -- the same poll at the Fox website. Might be fun to look at but I wouldn't make any predictions based on that data.

2) One major goal of Statistics in general and polling in particular is to try to capture a single value (often a percent) about a very large population at one snapshot in time. For instance, to take this out of the realm of politics: Among Americans over 18 (a population of hundreds of millions of people), what percent of them would favor abolishing the penny? I envision this percentage as an unknowable number, the true percentage among all of those folks. If you believe in God, you'd probably say that He'd know that percentage even though it is constantly changing as people leave the population by dying or enter the population by turning 18 or by immigrating here (that's another thread).

That magical, elusive percentage is what a statistician would like to know and it is called a "parameter."  If every single member of that population could be asked, then we should be able to get a good handle on that parameter. But that's a very time consuming and costly proposition. So much so that we only do this kind of statistics rarely and it is called a census. For the sake of illustration, let's suppose that we know the true parameter (even though statisticians almost never do) and that the true percentage of our population who want to abolish the penny is 44%.

Instead of attempting a census on a very large population, statisticians realized that one could get a reasonable approximation for this parameter by taking a random sample of the population and using that percentage as the best approximation. Randomness is key. If you sample people who are hanging around the mall or a church or an NFL game, you might get skewed results if the members of that sample are not representative of the whole population. Heaven forbid you sample attendees at a coin-collecting convention. So the statistician sets out to conduct a random sample.

3) A key fact about sampling is that, even for a population in the hundreds of millions, one only needs to sample a relatively small fraction of them to get a decent approximation of the parameter. Think of it like ocean water. Just because the ocean is enormous doesn't mean that we'd need a lot of water to take a reasonably representative sample. It just needs to be mixed well, and we need to choose a sample at random, not somewhere convenient like right off a pier.

Turns out that a sample of only about 1000 subjects is enough for most situations so if you look closely at most polls, they will say something like "1023 people were surveyed." This is rather amazing: a well-conducted random sample of only 1000 people will give a reasonably accurate estimate for the true percentage of all Americans who favor or oppose a proposition. Such a sample yields a "margin of error" of about +/- 3%, roughly.
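The "only about 1000 subjects" claim follows from a square-root law: quadrupling the sample size only halves the margin of error, so past roughly a thousand respondents the gains flatten out. A quick sketch using the normal approximation 1.96 * sqrt(p(1-p)/n) with worst-case p = 0.5:

```python
import math

# 95% margin of error for a sample proportion, worst case p = 0.5.
# Quadrupling n halves the margin of error (square-root law).
for n in (100, 400, 1000, 10000):
    moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
    print(f"n = {n:5d}: +/- {100 * moe:.1f} points")
```

Notice that the population size never appears in the formula, which is the ocean-water point above: a well-mixed sample of fixed size works no matter how big the ocean is.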

4) Conducting a random sample of 1000 Americans over the age of 18, though, is a royal PIA. You cannot very well put all those names in a hat and pull out 1000 of them. I won't get into the gory details, but good pollsters tend to break this up into stages and do a stratified random sample. Suffice to say, it is easier to cut corners and make a specious sample than it is to do it well. This is why I trust the big pollsters like Gallup, Roper, Quinnipiac, etc.: they have the funds and expertise to conduct this random sample properly. Now, as you must suspect, it is possible that if you only sample 1000 people, we could get extraordinarily unlucky and just happen to select all penny-lovers in our sample even though, in truth, 44% of our population want to abolish it. This can happen, but you can also hit the Powerball on three consecutive weeks. Sure it can happen, but it's very, very, very unlikely. The techniques of statistics allow us to quantify just how likely it is that our random sample will be very far off from what one would expect. The percentage of people in our sample who want to abolish is called a "statistic"; let's for the sake of argument assume that it came out as 40%.

5) That statistic (the 40%) would vary from sample to sample. If we redid the sample of 1000 again, we might get 45%. And again and get 41%. But it varies according to a pattern which is very well understood. Imagine if you sampled over and over again (like millions of times): those percentages would dance around the true parameter percentage of 44%, with some hitting right on the money and some being pretty far away (maybe as far away as 30% or 60%, but it is very, very unlikely that they would be much further from 44% unless our sample was tainted). Graphing all of those different percentages would reveal a bell curve (a normal distribution) with the peak at 44%, trailing into the rare tails down towards 30% on the left and 60% on the right.
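That repeated-sampling picture is simple to simulate. A sketch under the example's assumptions (hypothetical true parameter of 44%, polls of 1,000 respondents), using only the standard library:

```python
import random
import statistics

random.seed(0)
TRUE_P = 0.44   # hypothetical parameter from the running example
N = 1000        # respondents per simulated poll
POLLS = 2000    # number of simulated polls

# Each simulated poll: fraction of N random "voters" who favor abolition.
stats = [sum(random.random() < TRUE_P for _ in range(N)) / N
         for _ in range(POLLS)]

print(f"mean of sample percentages: {statistics.mean(stats):.3f}")   # ~0.44
print(f"standard deviation:         {statistics.stdev(stats):.4f}")  # ~0.016
```

Histogram those 2,000 sample percentages and the bell curve described above appears, centered on 44% with tails thinning out a few points away.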

6) Here is the bummer: we usually only have the time and energy to do one sample. So let's assume we got 40% for our sample statistic. Remember that the parameter was 44% but we need to pretend like we didn't know this because statisticians never know this "true" number.  So we really need to rely on that 40% as our best estimate. If someone put a gun to your head and said "Predict the true parameter" you should guess 40% because that's what the sample said. But we would not have much confidence in this result because of the sampling variability mentioned above. I'd feel much better if I could say "I think the true parameter is pretty close to 40%". In fact, if I were to give that +/- 3% wiggle room, I would report that I'm pretty confident that the parameter is somewhere between 37% and 43%. That range gives us 95% confidence that we've captured the parameter in that interval.

7) Whoops, the parameter is actually not in that interval. We got unlucky, and that happens about 5% of the time. We do a perfectly random sample, we get an estimate and give ourselves the 3% wiggle room, and still we whiffed. But you never know when it is going to happen--we do not know what the parameter is, remember. So we claim it is in that interval, but we cannot be certain. This, by the way, is a central difference between mathematics and statistics. Mathematicians are certain (they prove things) while statisticians wrestle with probabilities and can tell you when something is likely to be true or false.
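Points 6 and 7 together say a "95% confidence" interval should miss the parameter about 5% of the time, and that coverage rate can be checked by simulation. A sketch, again assuming the example's hypothetical 44% parameter and polls of 1,000:

```python
import random

random.seed(1)
TRUE_P = 0.44
N = 1000
TRIALS = 3000
MOE = 0.031     # ~ 1.96 * sqrt(p(1-p)/n) for n = 1000

hits = 0
for _ in range(TRIALS):
    stat = sum(random.random() < TRUE_P for _ in range(N)) / N
    if stat - MOE <= TRUE_P <= stat + MOE:
        hits += 1   # this interval captured the true parameter

print(f"coverage: {hits / TRIALS:.3f}")  # close to 0.95
```

Each simulated pollster builds the same +/- 3.1 point interval around their own statistic; roughly 19 intervals in 20 capture 44%, and the unlucky twentieth whiffs without any way of knowing it.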

The conclusion is: look for a random sample of at least 1000, go ahead and do your +/- margin of error, but do not assume that the "true percentage" lies in that range. We just don't know in any given sample, although our confidence grows with more samples or with larger ones.

 
I don't think any one person here relies on polls to the extent of not thinking for themselves.

I think the obsession of not listening at all to any polling is also not all that healthy...or just dismissing them all as meaningless.

There is a place for polling and using the data...it can be quite useful.


I think people also confuse polls with the pundits who misinterpret those polls.

 
About polls and surveys:

1) Online polls and/or any survey in which people may choose to respond by their own choice are bunk and should be either tossed out altogether or consumed merely for their entertainment value. Think: a Presidential poll on November 7th, 2016 in which online visitors to CNN's website could click who they're voting for -- versus -- the same poll at the Fox website. Might be fun to look at but I wouldn't make any predictions based on that data.

2) One major goal of Statistics in general and polling in particular is to try to capture a single value (often a percent) about a very large population at one snapshot in time. For instance, to take this out of the realm of politics: Among Americans over 18 (a population of hundreds of millions of people), what percent of them would favor abolishing the penny? I envision this percentage as an unknowable number, the true percentage among all of those folks. If you believe in God, you'd probably say that He'd know that percentage even though it is constantly changing as people leave the population by dying or enter the population by turning 18 or by immigrating here (that's another thread).

That magical, elusive percentage is what a statistician would like to know and it is called a "parameter."  If every single member of that population could be asked, then we should be able to get a good handle on that parameter. But that's a very time consuming and costly proposition. So much so that we only do this kind of statistics rarely and it is called a census. For the sake of illustration, let's suppose that we know the true parameter (even though statisticians almost never do) and that the true percentage of our population who want to abolish the penny is 44%.

Instead of attempting a census on a very large population, statisticians realized that one could get a reasonable approximation for this parameter by taking a random sample of the population and using that percentage as the best approximation. Randomness is key. If you sample people who are hanging around the mall or a church or an NFL game, you might get skewed results if the members of that sample are not representative of the whole population. Heaven forbid you sample attendees at a coin-collecting convention. So the statistician sets out to conduct a random sample.

3) A key fact about sampling is that, even for a population in the hundreds of millions, one only needs to sample a relatively small fraction of them to get a decent approximation of the parameter. Think of it like ocean water. Just because the ocean is enormous doesn't mean that we'd need a lot of water to take a reasonably representative sample. It just needs to be mixed well, and we need to choose a sample at random, not somewhere convenient like right off a pier.

Turns out that a sample of only about 1000 subjects is enough for most situations so if you look closely at most polls, they will say something like "1023 people were surveyed." This is rather amazing: a well-conducted random sample of only 1000 people will give a reasonably accurate estimate for the true percentage of all Americans who favor or oppose a proposition. Such a sample yields a "margin of error" of about +/- 3%, roughly.

4) Conducting a random sample of 1000 Americans over the age of 18, though, is a royal PIA. You cannot very well put all those names in a hat and pull out 1000 of them. I won't get into the gory details, but good pollsters tend to break this up into stages and do a stratified random sample. Suffice to say, it is easier to cut corners and make a specious sample than it is to do it well. This is why I trust the big pollsters like Gallup, Roper, Quinnipiac, etc.: they have the funds and expertise to conduct this random sample properly. Now, as you must suspect, it is possible that if you only sample 1000 people, we could get extraordinarily unlucky and just happen to select all penny-lovers in our sample even though, in truth, 44% of our population want to abolish it. This can happen, but you can also hit the Powerball on three consecutive weeks. Sure it can happen, but it's very, very, very unlikely. The techniques of statistics allow us to quantify just how likely it is that our random sample will be very far off from what one would expect. The percentage of people in our sample who want to abolish is called a "statistic"; let's for the sake of argument assume that it came out as 40%.

5) That statistic (the 40%) would vary from sample to sample. If we redid the sample of 1000 again, we might get 45%. And again and get 41%. But it varies according to a pattern which is very well understood. Imagine if you sampled over and over again (like millions of times): those percentages would dance around the true parameter percentage of 44%, with some hitting right on the money and some being pretty far away (maybe as far away as 30% or 60%, but it is very, very unlikely that they would be much further from 44% unless our sample was tainted). Graphing all of those different percentages would reveal a bell curve (a normal distribution) with the peak at 44%, trailing into the rare tails down towards 30% on the left and 60% on the right.

6) Here is the bummer: we usually only have the time and energy to do one sample. So let's assume we got 40% for our sample statistic. Remember that the parameter was 44% but we need to pretend like we didn't know this because statisticians never know this "true" number.  So we really need to rely on that 40% as our best estimate. If someone put a gun to your head and said "Predict the true parameter" you should guess 40% because that's what the sample said. But we would not have much confidence in this result because of the sampling variability mentioned above. I'd feel much better if I could say "I think the true parameter is pretty close to 40%". In fact, if I were to give that +/- 3% wiggle room, I would report that I'm pretty confident that the parameter is somewhere between 37% and 43%. That range gives us 95% confidence that we've captured the parameter in that interval.

7) Whoops, the parameter is actually not in that interval. We got unlucky, and that happens about 5% of the time. We do a perfectly random sample, we get an estimate and give ourselves the 3% wiggle room, and still we whiffed. But you never know when it is going to happen--we do not know what the parameter is, remember. So we claim it is in that interval, but we cannot be certain. This, by the way, is a central difference between mathematics and statistics. Mathematicians are certain (they prove things) while statisticians wrestle with probabilities and can tell you when something is likely to be true or false.

The conclusion is: look for a random sample of at least 1000, go ahead and do your +/- margin of error, but do not assume that the "true percentage" lies in that range. We just don't know in any given sample, although our confidence grows with more samples or with larger ones.
Great info...also always interesting is the number of people they actually need to contact to even get to the 1000 or so actually polled (can't recall the percentage that won't pick up or even respond...but IIRC it's a pretty low response rate).

 
Great info...also always interesting is the number of people they actually need to contact to even get to the 1000 or so actually polled (can't recall the percentage that won't pick up or even respond...but IIRC it's a pretty low response rate).
Good point. I did not even touch on the non-response rate or the other errors that can occur outside of random variation. My take on the 2016 polls is that 1) most were leaning or pretty clearly pointing towards Hillary, but 2) they were far from slam dunks, as those confidence intervals included scenarios in which Trump comes out the winner, and 3) most apropos to this conversation, I believe we hit a new level where subjects either did not tell the truth that they were voting for Trump or waffled at high rates, so when it came to actually voting, they swung to Trump. Plus you've got the Comey debacle, which hit in October and threw a wrench into predictions.

 
Good point. I did not even touch on the non-response rate or the other errors that can occur outside of random variation. My take on the 2016 polls is that 1) most were leaning or pretty clearly pointing towards Hillary, but 2) they were far from slam dunks, as those confidence intervals included scenarios in which Trump comes out the winner, and 3) most apropos to this conversation, I believe we hit a new level where subjects either did not tell the truth that they were voting for Trump or waffled at high rates, so when it came to actually voting, they swung to Trump. Plus you've got the Comey debacle, which hit in October and threw a wrench into predictions.


To this point, I was listening to the 538 podcast and they had a pollster from Pennsylvania on this week. He said that in the 2016 election, for whatever reason, they did not re-poll inside of 10 days from the election. As we know, for reasons including Comey, late undecideds likely broke to Trump.

 
