Fantasy Football - Footballguys Forums

Math Puzzles (from FiveThirtyEight - new puzzle every Friday)

Radius of the inner circle is 5/4.5 = 1.11m.  The distance to shore along the tangent from the edge of this circle is 4.875m.  The distance around the perimeter to that landing point (the long way around) is 22.44m, but the dog can only cover 4.5 * 4.875 = 21.94m in the time the duck takes to swim it.  
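A quick numeric check of those distances, as a sketch assuming the puzzle's setup (pond radius 5 m, dog running 4.5 times the duck's swimming speed; variable names are mine):

```python
import math

R = 5.0        # pond radius in meters
k = 4.5        # ratio of dog speed to duck speed
r = R / k      # inner circle: inside it the duck turns faster than the dog

# Straight dash from the edge of the inner circle along the tangent to shore
tangent = math.sqrt(R**2 - r**2)
# The dog's path the long way around to that landing point
dog_arc = R * (math.pi + math.acos(r / R))
# Ground the dog can cover while the duck swims the tangent
dog_reach = k * tangent

print(f"inner radius: {r:.2f} m")      # 1.11
print(f"duck's dash:  {tangent:.3f} m")  # 4.875
print(f"dog's arc:    {dog_arc:.2f} m")  # 22.44
print(f"dog's reach:  {dog_reach:.2f} m")  # 21.94
```

The duck wins the race to that point by about half a meter, which is the gap the post above computes.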

The dog cannot backtrack.  Once he does, he'll get to a point where the center of the pond is directly between him and the duck again, which is the position we were just in before the dog started running.  So we're starting over, except now the duck is even closer to the shore (i.e. the inner circle is even larger) than it was before.  Clearly this is a losing strategy for the dog as well.  
I see what you're saying, but have no idea how to solve for it.  If the dog does in fact backtrack, the duck could just make another 90 degree turn, this time "north" and put the dog in another quandary.  If the dog again backtracks, the duck can do it again....and eventually get to the shore with the dog still somewhere on the far side.  Maybe I'm changing my answer....

 
There’s an airplane with 100 seats, and there are 100 ticketed passengers each with an assigned seat. They line up to board in some random order. However, the first person to board is the worst person alive, and just sits in a random seat, without even looking at his boarding pass. Each subsequent passenger sits in his or her own assigned seat if it’s empty, but sits in a random open seat if the assigned seat is occupied. What is the probability that you, the hundredth passenger to board, will find your seat unoccupied?
My initial thought is that it seems like there's a pretty simple, non-mathy way to see what the answer is.  Not 100% sure if my answer is correct though.  I'll work on it a little later this morning to see if I can prove it, but won't post it for now so others can discuss their ideas freely.  

 
Pardon my French, but their answer to last week (rather, its explanation) sucks.  Seems to me they just tell you it's 4.6 and leave it at that.  Even the little video (the first one) has the duck swimming in his little circle, then making a beeline for the shore closest to him (which was my example, which gave me pi + 1), not your example where the duck heads out on a tangent to the shore (is that the way to describe his path?), which ended up being 4.6.

ETA - Ok, clicked on the link to the real math of the problem.  Way beyond me.  At least I didn't say pi.....onto this week's puzzle.

 
My initial thought is that it seems like there's a pretty simple, non-mathy way to see what the answer is.  Not 100% sure if my answer is correct though.  I'll work on it a little later this morning to see if I can prove it, but won't post it for now so others can discuss their ideas freely.  
I’m with you on that one... To ensure the calculations were ok, I looked at a two-seat plane and a three-seat plane (same answer), and after looking at all the four-seat scenarios, an easy conclusion can be drawn for a plane of any size.

 
7/10 :bag:   But I did them all quickly in my head because I was impatient.  If I'd actually used the 24 minutes allotted or whatever I definitely would've gotten at least 9, probably 10 of them.  I'm the opposite of you I think, I'd definitely have gotten 8 but might've still gotten 10 wrong just by being careless.  
Bah, I got #10 wrong because I forgot to double the number of triangles in the half a hexagon I drew out :bag:

 
I’m with you on that one... To ensure the calculations were ok, I looked at a two-seat plane and a three-seat plane (same answer), and after looking at all the four-seat scenarios, an easy conclusion can be drawn for a plane of any size.
Hmm, I went in all mathy and I'm stuck on the third guy.  Maybe I'm missing something.

First guy gets on the plane and grabs a seat.  1/100 odds he grabs his own - then everyone else falls in line and you get your seat.  But 99/100 he grabs another person's.  Those cover all 100 scenarios.

Second guy gets on.  In the 1/100 scenario of the first guy, the 2nd guy's seat is empty and he's happy, and so is everyone else behind him.  In the other 99/100 situations, there is a 1/99 shot that the 1st guy picked the 2nd guy's seat - which itself leaves 2 possible scenarios: the 2nd guy has a 1/99 shot of sitting down in the first guy's seat, making everyone else behind them happy, or a 98/99 chance he picks someone new's chair.  And then, within the 99/100 branch, there's a 98/99 shot that the 2nd guy's own chair is empty and he sits down in it. 

 
Hmm, I went in all mathy and I'm stuck on the third guy.  Maybe I'm missing something.

First guy gets on the plane and grabs a seat.  1/100 odds he grabs his own - then everyone else falls in line and you get your seat.  But 99/100 he grabs another person's.  Those cover all 100 scenarios.

Second guy gets on.  In the 1/100 scenario of the first guy, the 2nd guy's seat is empty and he's happy, and so is everyone else behind him.  In the other 99/100 situations, there is a 1/99 shot that the 1st guy picked the 2nd guy's seat - which itself leaves 2 possible scenarios: the 2nd guy has a 1/99 shot of sitting down in the first guy's seat, making everyone else behind them happy, or a 98/99 chance he picks someone new's chair.  And then, within the 99/100 branch, there's a 98/99 shot that the 2nd guy's own chair is empty and he sits down in it. 
Not sure if this helps, but by choosing a random seat, the first passenger is opening a "loop" so to speak.  You'll get your seat if and only if that loop gets closed before someone takes your seat.  

 
Not sure if this helps, but by choosing a random seat, the first passenger is opening a "loop" so to speak.  You'll get your seat if and only if that loop gets closed before someone takes your seat.  
Right, but he's got a 1% chance of that loop never opening if he picks his own seat.  That first guy also has a 1% chance of picking your seat from the start - a situation which leaves only 1 unhappy passenger, you. 

So the 2nd guy gets on; he's got a 99% chance of that loop being open - and it's really only FULLY closed if his seat was taken by the first guy (1/99, within the 99/100 chance that the first guy didn't pick his own seat) AND the 2nd guy then randomly picks the seat of the 1st guy (a 1/99 shot among the 99 open seats).  In every other situation, a passenger between #3 and #100 will have their seat occupied when it's their turn.

 
This was my solution (no math, but doing it long-hand) for a 5 person plane:

I looked at the potential outcomes:

seat chosen in order:

20% chance person 1 chooses his own seat: 1, 2, 3, 4, 5

20% chance he chooses seat 2 (2.5% chance of each of these):

2, 1, 3, 4, 5

2, 3, 1, 4, 5

2, 3, 4, 1, 5

2, 3, 4, 5, 1

2, 3, 5, 4, 1

2, 4, 3, 1, 5

2, 4, 3, 5, 1

2, 5, 3, 4, 1

20% chance he chooses seat 3 (5% chance of each of the following)

3, 2, 1, 4, 5

3, 2, 4, 1, 5

3, 2, 4, 5, 1

3, 2, 5, 4, 1

20% chance of seat 4 (10%)

4, 2, 3, 1, 5

4, 2, 3, 5, 1

20% chance of seat 5

5, 2, 3, 4, 1

Add up the probabilities of all the outcomes where person 5 ends up in seat 5 (the sequences ending in 5) and you get 50%

It worked for 4 seats too, so I assume - dangerous - that it would extrapolate out to 100 seats.  It helps that a passenger always takes their own seat if it's available, rather than making 99 random choices.
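The long-hand enumeration above can be checked by brute force. Here's a sketch (function name is mine) that walks every possible boarding outcome with exact fractions, weighting each random choice by its probability:

```python
from fractions import Fraction

def prob_last_ok(n):
    """Exact probability that passenger n finds seat n open, enumerating
    every boarding outcome: passenger 1 sits at random, everyone else
    takes their own seat if free, else a random open seat."""
    def walk(p, open_seats, prob):
        if p == n:  # last passenger: one seat left; is it his?
            return prob if n in open_seats else Fraction(0)
        if p == 1 or p not in open_seats:  # forced to pick at random
            choices = sorted(open_seats)
            return sum(walk(p + 1, open_seats - {c}, prob / len(choices))
                       for c in choices)
        return walk(p + 1, open_seats - {p}, prob)  # takes own seat
    return walk(1, frozenset(range(1, n + 1)), Fraction(1))

for n in (2, 3, 4, 5):
    print(n, prob_last_ok(n))  # 1/2 every time
```

This is exponential in the number of seats, so it's only practical for small planes like the 5-seat example, but it confirms the 50% total exactly.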
 
Right, but he's got a 1% chance of that loop never opening if he picks his own seat.  That first guy also has a 1% chance of picking your seat from the start - a situation which leaves only 1 unhappy passenger, you. 

So the 2nd guy gets on; he's got a 99% chance of that loop being open - and it's really only FULLY closed if his seat was taken by the first guy (1/99, within the 99/100 chance that the first guy didn't pick his own seat) AND the 2nd guy then randomly picks the seat of the 1st guy (a 1/99 shot among the 99 open seats).  In every other situation, a passenger between #3 and #100 will have their seat occupied when it's their turn.
You're correct that right off the bat, there's a 1% chance that the first passenger immediately closes the loop (in which case you get your seat) and also a 1% chance that he takes your seat (in which case you don't).  The other 98% of the time he just kicks the can down the road to another passenger.  Let's assume that last one is what happens.

When we get to that passenger (whoever it is), there's a 1/(however many seats are left) chance that they'll close the loop, a 1/(however many seats are left) chance that they'll take your seat, and the rest of the time they'll kick it further down the road to someone else.  Let's assume that happens again.

When we get to that passenger (whoever it is), there's a 1/(however many seats are left) chance that they'll close the loop, a 1/(however many seats are left) chance that they'll take your seat, and the rest of the time they'll kick it further down the road to someone else. 

Note that most passengers are going to get their correct seats.  It's only the ones that get roped into this loop that we're concerned with.  And at every step in this process, there's an equal chance that they'll close the loop (and thus you end up with your seat) or that they'll take your seat (in which case you don't).  That is, at every step there are three outcomes: you win, you lose, or the game keeps going.  And at every step, the probability of "you win" = the probability of "you lose".  So it's 50%. 

 
Easiest explanation I can come up with:

Convince yourself that for a 2-seat and a 3-seat plane, looking at all scenarios, you have a 50% chance of finding your seat open.

Now, for the 100-seat (or n-seat) plane... suppose the passengers are numbered 1 to 100 in boarding order. Imagine that guy #1 sits randomly in the seat of passenger #27 – then passengers #2 to #26 sit in their own spots – and #27, boarding, finds someone in his spot and sits randomly... convince yourself that this is the same problem as a 74-seat plane... and you’ll see that you have a 50% chance of finding your seat open.

Guy #1 sitting in spot #27 has just postponed the exact same problem (if #27 then sits in spot #93 – passengers #28 through #92 sit in their own spots – and it becomes an 8-seat problem).
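That reduction argument codes up directly: the first (or displaced) passenger either takes seat #1 (everyone else, including you, is fine), takes your seat (you lose), or hands the same problem to a later passenger with fewer seats that matter. A sketch with exact fractions (function name is mine):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n):
    """Chance the last of n passengers finds their own seat open,
    when the first passenger sits uniformly at random."""
    if n == 2:
        return Fraction(1, 2)  # first guy takes his own seat or yours
    # First guy picks: his own seat (you win), your seat (you lose, adds 0),
    # or passenger j's seat (2 <= j <= n-1), which restarts the identical
    # problem with the n - j + 1 seats that still matter.
    return (Fraction(1, n)
            + sum(Fraction(1, n) * p(n - j + 1) for j in range(2, n)))

print(p(100))  # 1/2 -- and the same for any plane size
```

By induction every sub-problem returns 1/2, which is why the answer doesn't depend on the number of seats.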

 
You're correct that right off the bat, there's a 1% chance that the first passenger immediately closes the loop (in which case you get your seat) and also a 1% chance that he takes your seat (in which case you don't).  The other 98% of the time he just kicks the can down the road to another passenger.  Let's assume that last one is what happens.

When we get to that passenger (whoever it is), there's a 1/(however many seats are left) chance that they'll close the loop, a 1/(however many seats are left) chance that they'll take your seat, and the rest of the time they'll kick it further down the road to someone else.  Let's assume that happens again.

When we get to that passenger (whoever it is), there's a 1/(however many seats are left) chance that they'll close the loop, a 1/(however many seats are left) chance that they'll take your seat, and the rest of the time they'll kick it further down the road to someone else. 

Note that most passengers are going to get their correct seats.  It's only the ones that get roped into this loop that we're concerned with.  And at every step in this process, there's an equal chance that they'll close the loop (and thus you end up with your seat) or that they'll take your seat (in which case you don't).  That is, at every step there are three outcomes: you win, you lose, or the game keeps going.  And at every step, the probability of "you win" = the probability of "you lose".  So it's 50%. 
Ok, that's spot on.  The one thing I didn't see right off the bat is that there would never be "two cans being kicked".  Meaning, if the first guy picked the 3rd guy's seat and then the 2nd guy gets on and picks the 4th guy's.  That wouldn't happen, as the 2nd guy would simply take his own seat.  At any given time there can only be a max of 1 person displaced.  Once I saw that light, your explanation made perfect sense.

 
I did my code-based simulation and also arrived at 50%; incidentally, over a large number of trials, just under 95% of passengers end up in their correct seats.

I also ran some additional tests, increasing the number of people who pick a random seat (all of these as$ho!es go first for simplicity's sake). 

  • The % of passengers ending up in their own seat decreases a bit less than linearly to around 67.5% with 10 random-sitters, and 48.4% with 20.
  • The probability of the last passenger getting his own seat drops much more quickly, down to 20% with just 4 random-sitters, but starts to flatten out, showing 9.3% with 10 random, and 5.6% with 20 random.
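For anyone who wants to reproduce those numbers, here's a minimal Monte Carlo sketch of the kind of simulation described above (function and parameter names are mine; `n_random` is the number of random-sitters who board first):

```python
import random

def simulate_boarding(n_seats=100, n_random=1):
    """One trial. Returns (last passenger got own seat, count in own seat).
    The first n_random passengers sit uniformly at random; everyone else
    takes their own seat if it's open, else a random open one."""
    open_seats = set(range(n_seats))
    seats = [None] * n_seats              # seats[i] = passenger in seat i
    for p in range(n_seats):
        if p < n_random or p not in open_seats:
            choice = random.choice(tuple(open_seats))
        else:
            choice = p
        seats[choice] = p
        open_seats.discard(choice)
    n_correct = sum(1 for i, p in enumerate(seats) if i == p)
    return seats[-1] == n_seats - 1, n_correct

trials = 20_000
results = [simulate_boarding() for _ in range(trials)]
print("last passenger wins:", sum(r[0] for r in results) / trials)  # ~0.50
print("in own seat:", sum(r[1] for r in results) / (trials * 100))  # ~0.95
```

Bumping `n_random` up to 10 or 20 reproduces the drop-offs listed above.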
 
  • The probability of the last passenger getting his own seat drops much more quickly, down to 20% with just 4 random-sitters, but starts to flatten out, showing 9.3% with 10 random, and 5.6% with 20 random.
My wild guess is that the probability of the last guy getting his own spot is 1/(n+1) with n random sitters (i.e. the number of 'important' seats, all others sitting in their own spots).

 
My wild guess is that the probability of the last guy getting his own spot is 1/(n+1) with n random sitters (i.e. the number of 'important' seats, all others sitting in their own spots).
We assuming all the random sitters are the first n number of people to board?

 
We assuming all the random sitters are the first n number of people to board?
For the sake of simplicity, yes - but I'm not sure the probability changes whether they're the first n to board or placed randomly in the queue.

 
I did my code-based simulation and also arrived at 50%; incidentally, over a large number of trials, just under 95% of passengers end up in their correct seats.

I also ran some additional tests, increasing the number of people who pick a random seat (all of these as$ho!es go first for simplicity's sake). 

  • The % of passengers ending up in their own seat decreases a bit less than linearly to around 67.5% with 10 random-sitters, and 48.4% with 20.
  • The probability of the last passenger getting his own seat drops much more quickly, down to 20% with just 4 random-sitters, but starts to flatten out, showing 9.3% with 10 random, and 5.6% with 20 random.
Played around a bit and assigned the a-holes randomly among the passengers

  • One random sitter still ends up in the neighborhood of 50% for the last guy, and 95% overall
  • With 4 random sitters, last guy gets 20%, and overall, 87%
  • With 10 random sitters, last guy gets 9.6%, and overall, 76%
  • With 20 random sitters, last guy gets 5.6% and overall, 63%
So the probability for the last passenger doesn't seem to change much (if at all) whether the random-sitter(s) is (are) first in line or scattered at random.  But the random approach seems to help get more of the other passengers in their own seats.

 
The first one is a simple cryptogram; I'm not sure the others are.

I'm pretty sure the second one isn't, as the 4th "word" is 4 letters long and starts with a double.  I can't think of many 4-letter words that start with a double letter.  Maybe each word is also backwards?

 
How about a clue on the second?  Am I correct in that it's not simply a "letter for letter swap" like the first one?  There has to be some other step in the process.
Correct. The first and fourth puzzles are simple substitution ciphers.  The second one is a polyalphabetic cipher.  The third one is a digram cipher.  

 
New puzzle, and solution to last week's.

Two players go on a hot new game show called “Higher Number Wins.” The two go into separate booths, and each presses a button, and a random number between zero and one appears on a screen. (At this point, neither knows the other’s number, but they do know the numbers are chosen from a standard uniform distribution.) They can choose to keep that first number, or to press the button again to discard the first number and get a second random number, which they must keep. Then, they come out of their booths and see the final number for each player on the wall. The lavish grand prize — a case full of gold bullion — is awarded to the player who kept the higher number. Which number is the optimal cutoff for players to discard their first number and choose another? Put another way, within which range should they choose to keep the first number, and within which range should they reject it and try their luck with a second number?

 
This one seems too easy.  I like when puzzles seem too easy but end up having a surprising solution, but I have a bad feeling this one is as easy as it seems.  

 
I agree, seems obvious.  Could it have something to do with trying to determine what the other guy does?

 
I agree, seems obvious.  Could it have something to do with trying to determine what the other guy does?
Yeah, I thought of that and at first concluded that it didn't matter.  But now I'm having second thoughts.  Maybe it's not as easy as it seems. 

Edit: I've run some simulations and think you're right, the optimal cutoff seems like it settles at some equilibrium that is not the obvious answer. 

 
Yeah, I thought of that and at first concluded that it didn't matter.  But now I'm having second thoughts.  Maybe it's not as easy as it seems. 
I thought of that initially too - interaction with the other guy... But aren't we maximizing our chance of winning by determining our own optimal cutoff point regardless of the strategy the other guy uses?...

Since he's in the same situation as we are, at worst we get a 50-50 chance of winning if he uses the optimal cutoff also - and our chances of winning just go up if he uses a sub-optimal strategy?

 
I thought of that initially too - interaction with the other guy... But aren't we maximizing our chance of winning by determining our own optimal cutoff point regardless of the strategy the other guy uses?...

Since he's in the same situation as we are, at worst we get a 50-50 chance of winning if he uses the optimal cutoff also - and our chances of winning just go up if he uses a sub-optimal strategy?
So here's what I've seen via simulation that I'm trying to make sense of:

A player's expected value is highest when he uses 0.5 as a cutoff.  This is intuitive, since if his first number is less than 0.5 he's more likely than not to improve by trying again, and vice versa.  Using a cutoff of 0.5 yields an expected value of 0.625.

On the one hand, it would then seem that using any other cutoff would be suboptimal.  However, if Player 2 assumes Player 1 is using this simple strategy, then he knows on average he's going to have to beat a score of 0.625 to win.  So perhaps he should use that as a cutoff instead (like on the Price is Right, it normally wouldn't make sense to spin the big wheel again if your first spin was $0.90, but if the guy before you already got $0.95 you have to spin again).  

And it looks like using a cutoff of 0.625 does indeed result in a lower expected value, BUT it results in Player 2 winning more frequently!  

So the obvious response is, if Player 1 knows that this is what Player 2 would do, then he will also adjust to use a cutoff of Player 2's expected value (which looks to be something like 0.617).  And then Player 2 would do the same, etc.  And this rapidly converges to some value around 0.617 where the two players have roughly a 50/50 shot of winning the prize. 

No idea if that's actually correct, I usually start with simulations to get an idea of the answer and then figure out how to prove it analytically, so that's the next step. :shrug:
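A sketch of the head-to-head comparison described above (simple Monte Carlo; names are mine). Against a 0.5-cutoff player, a 0.625-cutoff player gives up expected value but wins slightly more often than half the time:

```python
import random

def final_number(cutoff):
    # Keep the first draw if it reaches the cutoff, else redraw once.
    x = random.random()
    return x if x >= cutoff else random.random()

def win_rate(a, b, trials=500_000):
    """Fraction of games a cutoff-a player beats a cutoff-b player."""
    return sum(final_number(a) > final_number(b)
               for _ in range(trials)) / trials

print(win_rate(0.625, 0.5))  # slightly above 0.5
```

The edge is small (well under 1%), which is why it takes a lot of trials to see it clearly and why the iteration converges so quickly to a nearby fixed point.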

 
I agree that we don't have to guess what the other guy does.

Experimentally, there's not a huge range of probabilities.  I ran trials with "cutoff" values (i.e., if the first number is < the cutoff, go again) ranging from 0.1 to 0.9, and the average result ranged from 0.542 to 0.628, with the "peak" (such as it is) somewhere in the 0.44 to 0.56 range.  Which is centered suspiciously around the 50% mark.

The curve is so flat that it's really hard to find a true optimum among the bits of noise.

 
On the one hand, it would then seem that using any other cutoff would be suboptimal.  However, if Player 2 assumes Player 1 is using this simple strategy, then he knows on average he's going to have to beat a score of 0.625 to win.  So perhaps he should use that as a cutoff instead (like on the Price is Right, it normally wouldn't make sense to spin the big wheel again if your first spin was $0.90, but if the guy before you already got $0.95 you have to spin again).  
Good stuff here, Vizzini.

 
What are the odds that if given two random numbers, both would be less than .666 (2/3rds)?  If you're first one is that or higher, stand pat.  If it isn't, go again.

Of course, this leads to the possibility of getting .65 and wanting to go again with a 2/3rds shot of doing worse.  Somewhere around .61 "sounds" right.

 
So here's what I've seen via simulation that I'm trying to make sense of:

A player's expected value is highest when he uses 0.5 as a cutoff.  This is intuitive, since if his first number is less than 0.5 he's more likely than not to improve by trying again, and vice versa.  Using a cutoff of 0.5 yields an expected value of 0.625.

On the one hand, it would then seem that using any other cutoff would be suboptimal.  However, if Player 2 assumes Player 1 is using this simple strategy, then he knows on average he's going to have to beat a score of 0.625 to win.  So perhaps he should use that as a cutoff instead (like on the Price is Right, it normally wouldn't make sense to spin the big wheel again if your first spin was $0.90, but if the guy before you already got $0.95 you have to spin again).  

And it looks like using a cutoff of 0.625 does indeed result in a lower expected value, BUT it results in Player 2 winning more frequently!  

So the obvious response is, if Player 1 knows that this is what Player 2 would do, then he will also adjust to use a cutoff of Player 2's expected value (which looks to be something like 0.617).  And then Player 2 would do the same, etc.  And this rapidly converges to some value around 0.617 where the two players have roughly a 50/50 shot of winning the prize. 

No idea if that's actually correct, I usually start with simulations to get an idea of the answer and then figure out how to prove it analytically, so that's the next step. :shrug:
Off the top of my head and trying to play devil's advocate here (no empirical validation)... We agree that the optimal 'stand-alone' cutoff is 0.5, which gives an expected value of 0.625. If we suppose P2 believes P1 uses that strategy and goes with a 0.625 cutoff of his own, we know that yields an expected value lower than 0.625 (since the optimum is at 0.5) - meaning P1's 0.5 strategy versus P2's 0.625 strategy results in P1 winning more often than P2 (the opposite of the conclusion above).

 
Off the top of my head and trying to play devil's advocate here (no empirical validation)... We agree that the optimal 'stand-alone' cutoff is 0.5, which gives an expected value of 0.625. If we suppose P2 believes P1 uses that strategy and goes with a 0.625 cutoff of his own, we know that yields an expected value lower than 0.625 (since the optimum is at 0.5) - meaning P1's 0.5 strategy versus P2's 0.625 strategy results in P1 winning more often than P2 (the opposite of the conclusion above).
I'm guessing the apparent paradox is resolved sort of like a presidential candidate winning the popular vote but losing the election - the magnitude of Player 2's losses end up being greater than the magnitude of his wins, so overall he has a lower average score but ends up with more wins.  Something like that, probably. 

 
.5 chance of getting .5 or better the first time, with an EV of .75 on those draws; you throw out anything below .5, and the EV of the second draw is .5. So:

.5 (.75) + .5 (.5) = .625 ev if you use .5 as a cut off

.4 (.8) + .6 (.5) = .62 ev if you use a .6 cut off. 

.6 (.7) + .4 (.5) = .62 ev if you use a .4 cut off

Interesting. 
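Those numbers all fall out of a one-line formula: with cutoff c you keep a first draw in [c, 1] (probability 1 - c, mean (1 + c)/2) and otherwise redraw (mean 1/2). A quick check (function name is mine):

```python
def expected_score(c):
    # keep the first uniform draw on [c, 1], else take a fresh uniform draw
    return (1 - c) * (1 + c) / 2 + c * 0.5

for c in (0.4, 0.5, 0.6):
    print(c, expected_score(c))  # 0.62, 0.625, 0.62 (up to float rounding)
```

Setting the derivative (1 - 2c)/2 ... wait, d/dc of (1 - c^2)/2 + c/2 is -c + 1/2, which is zero at c = 0.5, so 0.5 is indeed the EV-maximizing cutoff, and the curve is symmetric and flat near the top - matching the .62 / .625 / .62 values above.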

 
So here's what I've seen via simulation that I'm trying to make sense of:

A player's expected value is highest when he uses 0.5 as a cutoff.  This is intuitive, since if his first number is less than 0.5 he's more likely than not to improve by trying again, and vice versa.  Using a cutoff of 0.5 yields an expected value of 0.625.

On the one hand, it would then seem that using any other cutoff would be suboptimal.  However, if Player 2 assumes Player 1 is using this simple strategy, then he knows on average he's going to have to beat a score of 0.625 to win.  So perhaps he should use that as a cutoff instead (like on the Price is Right, it normally wouldn't make sense to spin the big wheel again if your first spin was $0.90, but if the guy before you already got $0.95 you have to spin again).  

And it looks like using a cutoff of 0.625 does indeed result in a lower expected value, BUT it results in Player 2 winning more frequently!  

So the obvious response is, if Player 1 knows that this is what Player 2 would do, then he will also adjust to use a cutoff of Player 2's expected value (which looks to be something like 0.617).  And then Player 2 would do the same, etc.  And this rapidly converges to some value around 0.617 where the two players have roughly a 50/50 shot of winning the prize. 

No idea if that's actually correct, I usually start with simulations to get an idea of the answer and then figure out how to prove it analytically, so that's the next step. :shrug:
But Player 2's goal isn't to beat Player 1's average value, right? It's to beat his "naive strategy" of staying on anything above 0.5 (which we already know yields a 0.625 AV).

If Player 2 draws, say, 0.6, he knows that against P1's naive strategy he has a 40% chance of winning if he stands pat (he'll beat P1 60% of the 50% of the time that P1's first number was <0.5, plus 100% of the 10% of the time it was between 0.5 and 0.6). Why would he throw that away for a random new number that has only a 37.5% chance of winning?

Despite the logic of the iterative GTO approach, I can't see how throwing away numbers higher than 0.5 makes real-world sense for either player.

 
Top of my head and trying to play devil's advocate here (no empirical validations)... We agree that the optimal 'stand alone' cutoff is at 0.5 - which gives an expected value of 0.625.... If we suppose P2 believes that P1 uses that strategy and he goes for a 0.625 cutoff on his own - we know that it yields an expected value lower than 0.625 (since it is optimal at 0.5) - meaning that P1-0.5 strategy versus P2-0.625 strategy... Results in P1 winning more often than P2 (the opposite of the conclusion above).
Expected value doesn't really matter though. What matters is how often your number is higher than your opponent's.

If you get a 0.5 on your first spin, you can expect it to beat your opponent's first spin 50% of the time. But if you assume your opponent will respin whenever his number is below 0.5, then he'll beat your 0.5 in half of those cases too - so he wins 75% of the time overall (the 50% when his first spin beats yours, plus half of the remaining 50%).

 
And this rapidly converges to some value around 0.617 where the two players have roughly a 50/50 shot of winning the prize
I think this will be the key to figuring out the answer mathematically. Whatever the optimum number is, you should have a 50% chance of beating it if you respin on anything below.

Let X be the optimal number, and P be the percent chance of beating that number with one spin (1-X). Your chances of beating X if you respin on everything below X are

P + (1-P)P = 0.5

P^2 - 2P + 0.5 = 0

Solving with the quadratic equation

P = 0.2928932188134524

Which means X is 0.7071067811865476
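Taking that quadratic at face value, here's a quick numeric check of the root (this only verifies the arithmetic above, not the game-theory reasoning behind it):

```python
import math

# P^2 - 2P + 0.5 = 0, taking the root that lies in (0, 1)
P = (2 - math.sqrt(4 - 2)) / 2   # = 1 - sqrt(2)/2
X = 1 - P                        # = sqrt(2)/2

print(P, X)  # 0.2928..., 0.7071...
# sanity check: respinning on everything below X beats X half the time
assert abs(P + (1 - P) * P - 0.5) < 1e-12
```

So under this poster's setup the cutoff comes out to exactly sqrt(2)/2. Note that this sits a bit above the ~0.617 value the simulations upthread converged to, so one of the two approaches is modeling the game differently.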

 
