kutta said:
Adam Harstad said:
I'm assuming you're referring to the Monty Hall problem? I understand it very well. If Monty has no foreknowledge of what is behind the doors, and he happens to reveal a goat, then you should switch. If Monty has perfect foreknowledge of what is behind the doors and deliberately reveals a goat every time, then whether you switch or stay will have no impact on your odds. If Monty is a profit-maximizing entity with perfect foreknowledge, then the contestant should assume that he only reveals a goat when the contestant has already selected the car in an effort to trick them into switching, and the contestant should stay put.
The fact that Monty's foreknowledge of what's behind the doors actually has an impact on your odds of selecting the car by switching should provide the best illustration imaginable that odds are not an actual thing intrinsic to the event itself, but rather a human creation based on incomplete information.
I'm just jumping in here, but I think you are wrong. Maybe it's been addressed, but I'm too lazy to read 18 pages to find out.
When you initially pick, you have a 1/3 chance of picking the prize and a 2/3 chance of picking a goat. If you go into the game having decided that, no matter what, you will switch doors after Monty reveals a goat (which he can always do, because he knows where the goats are), you now have a 2/3 chance of ending up with the prize. Pretend there are 1000 doors and Monty knows which ones have goats. If you pick one, you have a 1/1000 chance of picking the prize and a 999/1000 chance of picking a goat. If he then opens 998 of the remaining doors, all goats, of course you should switch: there is a 999/1000 chance the prize is behind the one door he left closed.
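If it helps, here's a quick simulation sketch (my own illustration, not from the original post; it assumes the standard rules, where Monty knows where the prize is and always opens goat doors you didn't pick):

```python
# Monte Carlo check of the "always switch" argument under the standard rules.
import random

def play(num_doors: int, switch: bool) -> bool:
    prize = random.randrange(num_doors)
    pick = random.randrange(num_doors)
    if switch:
        # Monty opens every other goat door, leaving exactly one closed door
        # besides yours. Switching means taking that remaining door, which
        # holds the prize unless your first pick was already correct.
        return pick != prize
    return pick == prize

trials = 100_000
for doors in (3, 1000):
    wins = sum(play(doors, switch=True) for _ in range(trials))
    # ~0.667 for 3 doors and ~0.999 for 1000 doors, matching 2/3 and 999/1000.
    print(f"{doors} doors, always switch: {wins / trials:.3f}")
```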
I think SSOG gets the problem, but just got ahead of himself writing it all out.
My main problem is the assertion that Kahneman's thesis is valid in all situations.
Sadly, we humans don't have an infinite number of moments in our lives, which is what it would take for always choosing the higher expected outcome to be the viable strategy.
The easiest way to illustrate this is with cash:
Someone gives you a billion dollars.
The same person then offers you a coin flip.
If you call the coin correctly, your billion becomes three billion; if you don't, you lose it all.
We all turn the flip down, even though we are leaving five hundred million in expected value on the table.
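The arithmetic behind that number, plus one standard way to model why we still decline (a sketch of my own; the log-utility comparison is just a common illustration of risk aversion, not something from the post):

```python
# Expected value of the coin flip versus keeping the sure billion.
import math

sure_thing = 1_000_000_000            # keep the billion
outcomes = [3_000_000_000, 0]         # win: end with three billion; lose: nothing

ev = sum(outcomes) / 2
print(f"Expected value of the flip: ${ev:,.0f}")               # $1,500,000,000
print(f"Expected gain over keeping: ${ev - sure_thing:,.0f}")  # $500,000,000

# Under log utility (a standard risk-averse model), the flip looks far worse:
# losing everything is ruinous, so with even a tiny floor (say $1 instead of $0)
# the sure billion wins easily.
u_sure = math.log(sure_thing)
u_flip = 0.5 * math.log(outcomes[0]) + 0.5 * math.log(1)
print(f"log-utility: sure={u_sure:.2f}, flip={u_flip:.2f}")
```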
Evolution shaped humans to be risk-averse, and it has generally served us well.
We couldn't afford even a small risk of not having enough to eat just to hunt for a surplus.
It took the agricultural revolution to solve that one.
Would we have been better off if we were still that risk-averse, saying we couldn't afford to risk the mortgage money even when the risk of loss from investing in the stock market was small?
In an infinite system with infinite attempts, sure, maximizing expected value is great. In our world, sometimes we shouldn't.
It's the whole expected-points argument against punting, writ large.