Popular perceptions of randomness are frequently wrong, being based on logical fallacies. What follows is an attempt to identify the sources of those fallacies and correct the underlying logical errors.
The Gambler's Fallacy.
The gambler's fallacy is a formal fallacy. It is the incorrect belief that the likelihood of a random event can be affected by or predicted from other, independent events.
The gambler's fallacy gets its name from the fact that, where the random event is the throw of a die or the spin of a roulette wheel, gamblers will risk money on their belief in "a run of luck" or a mistaken understanding of "the law of averages". It often arises because a similarity between random processes is mistakenly interpreted as a predictive relationship between them. (For instance, two fair dice are similar in that each has the same chance of yielding each number - but they are independent in that they do not actually influence one another.)
A more subtle version of the fallacy is that an "interesting" (non-random-looking) outcome is "unlikely" (e.g. that the sequence 1, 2, 3, 4, 5, 6 in a lottery result is less likely than any other individual outcome). Even apart from the debate about what constitutes an "interesting" result, this can be seen as a version of the gambler's fallacy because it is saying that a random event is less likely to occur if the result, taken in conjunction with recent events, would produce an "interesting" pattern.
The gambler's fallacy can be illustrated by considering the repeated toss of a coin. With a fair coin the chances of getting heads are exactly 0.5 (one in two). The chances of it coming up heads twice in a row are 0.5×0.5=0.25 (one in four). The probability of three heads in a row is 0.5×0.5×0.5= 0.125 (one in eight) and so on.
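The same arithmetic can be checked in a few lines of Python, purely as an illustrative sketch:

```python
# The probability of n heads in a row with a fair coin is 0.5 multiplied
# by itself n times.
for n in range(1, 6):
    p = 0.5 ** n
    print(f"P({n} heads in a row) = {p}  (1 in {2 ** n})")
```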
Now suppose that we have just tossed four heads in a row. A believer in the gambler's fallacy might say, "If the next coin flipped were to come up heads, it would generate a run of five successive heads. The probability of a run of five successive heads is (1/2)^5 = 1/32; therefore, the next coin flipped only has a 1 in 32 chance of coming up heads."
This is the fallacious step in the argument. If the coin is fair, then by definition the probability of tails must always be 0.5, never more or less, and the probability of heads must always be 0.5, never more or less. While the probability of a run of five heads is only 1 in 32 (0.03125), that is its probability before the coin is first tossed. After the first four tosses, those results are no longer unknown, so they no longer enter the calculation. The probability of five consecutive heads is the same as that of four heads followed by one tails; tails is no more likely. In fact, the calculation of the 1 in 32 probability relied on the assumption that heads and tails are equally likely at every step. Each of the two possible outcomes has equal probability no matter how many times the coin has been flipped previously and no matter what the results were. Reasoning that the next toss is more likely to be a tail than a head because of the past tosses is the fallacy: the idea that a run of luck in the past somehow influences the odds of a bet in the future. This kind of logic would only apply if we had to guess all the tosses' results before any of them were carried out - a bet placed in advance on the full sequence HHHHH really does have only a 1 in 32 chance of succeeding.
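A quick simulation makes the distinction clear: the full run of five heads is rare, but the fifth toss taken on its own - given that the first four were heads - is still an even bet. This is only an illustrative sketch, and the trial count is arbitrary:

```python
import random

# Flip five fair coins many times, then compare the overall rate of HHHHH
# with the rate of heads on the fifth toss *given* the first four were heads.
trials = 1_000_000
all_heads = 0              # runs of five heads, out of all trials
four_heads = 0             # trials whose first four tosses were heads
fifth_head_after_four = 0  # of those, how often the fifth toss was also heads

for _ in range(trials):
    tosses = [random.random() < 0.5 for _ in range(5)]
    if all(tosses):
        all_heads += 1
    if all(tosses[:4]):
        four_heads += 1
        if tosses[4]:
            fifth_head_after_four += 1

print("P(five heads in a row)        ~", all_heads / trials)                    # ~1/32
print("P(heads | four heads already) ~", fifth_head_after_four / four_heads)    # ~0.5
```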
Here are some other examples:
1. What is the probability of flipping 21 heads in a row with a fair coin? (Answer: 1 in 2,097,152, or approximately 0.000000477.) What is the probability of doing it, given that you have already flipped 20 heads in a row? (Answer: 0.5.)
2. Are you more likely to win the lottery jackpot by choosing the same numbers every time or by choosing different numbers every time? (Answer: Either strategy is equally likely to win.)
3. Are you more or less likely to win the lottery jackpot by picking the numbers which won last week, or by picking numbers at random? (Answer: Either strategy is equally likely to win.)
(This does not mean that all possible choices of numbers within a given lottery are equally good. While the odds of winning may be the same regardless of which numbers are chosen, the expected payout is not, because of the possibility of having to share the jackpot with other players: a rational gambler might attempt to predict other players' choices and then deliberately avoid those numbers, as the rough sketch below illustrates.)
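To make the shared-jackpot point concrete, here is a toy expected-value sketch. The win probability, jackpot size, and co-winner counts are all invented purely for illustration, and splitting by the expected number of co-winners is only a rough approximation:

```python
# Two tickets with the same chance of winning, but different expected numbers
# of other players holding the same combination.
p_win = 1 / 13_983_816     # e.g. odds of a 6-from-49-style jackpot
jackpot = 10_000_000       # hypothetical jackpot

def expected_payout(p_win, jackpot, expected_co_winners):
    """Rough expected value of a ticket if, on a win, the jackpot is split
    with the expected number of other winners."""
    return p_win * jackpot / (1 + expected_co_winners)

print(expected_payout(p_win, jackpot, expected_co_winners=3.0))  # "popular" numbers
print(expected_payout(p_win, jackpot, expected_co_winners=0.2))  # "unpopular" numbers
```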
A number is "due"
This argument says that "since all numbers will eventually appear in a random selection, those that have not come up yet are 'due' and thus more likely to come up soon". This logic is only correct if applied to a system where numbers that come up are removed from the system, such as when playing cards are drawn and not returned to the deck. It's true, for example, that once a jack is removed from the deck, the next draw is less likely to be a jack and more likely to be some other card. However, if the jack is returned to the deck, and the deck is thoroughly reshuffled, there is an equal chance of drawing a jack or any other card the next time. The same truth applies to any other case where objects are selected independently and nothing is removed from the system after each event, such as a die roll, coin toss or most lottery number selection schemes. A way to look at it is to note that random processes such as throwing coins don't have memory, making it impossible for past outcomes to affect the present and future.
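A few lines of Python make the contrast concrete; this is just an illustrative sketch with a simplified deck:

```python
import random

# Compare drawing a second card after a jack has been removed (no replacement)
# with drawing after the jack has been returned and the deck reshuffled.
full_deck = ["jack"] * 4 + ["other"] * 48   # simplified 52-card deck

trials = 500_000
jack_without_replacement = 0
jack_with_replacement = 0

for _ in range(trials):
    # Without replacement: one jack has already been drawn and set aside.
    remaining = ["jack"] * 3 + ["other"] * 48
    if random.choice(remaining) == "jack":
        jack_without_replacement += 1
    # With replacement: the full, reshuffled deck is used again.
    if random.choice(full_deck) == "jack":
        jack_with_replacement += 1

print("P(jack) after a jack is removed    ~", jack_without_replacement / trials)  # ~3/51
print("P(jack) after the jack is returned ~", jack_with_replacement / trials)     # ~4/52
```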
A number is "cursed"
This argument is almost the reverse of the above, and says that numbers which have come up less often in the past will continue to come up less often in the future. A similar "number is 'blessed'" argument might be made saying that numbers which have come up more often in the past are likely to do so in the future. This logic is only valid if the roll is somehow biased and results don't have equal probabilities — for example, with weighted dice. If we know for certain that the roll is fair, then previous events have no influence over future events.
Note that in nature, unexpected or uncertain events rarely occur with perfectly equal frequencies, so it makes sense to learn which outcomes are more probable by observing how often they occur. What is fallacious is to apply this logic to systems that are specifically designed so that all outcomes are equally likely - such as dice, roulette wheels, and so on.
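For example, here is a sketch with a made-up weighted die, where the bias shows up clearly in the observed frequencies; the weights are invented for illustration:

```python
import random
from collections import Counter

# A hypothetical weighted die: the 6 comes up three times as often as any
# other face, so its true probability is 3/8 = 0.375 versus 1/8 = 0.125.
faces = [1, 2, 3, 4, 5, 6]
weights = [1, 1, 1, 1, 1, 3]

rolls = Counter(random.choices(faces, weights=weights, k=60_000))
for face in faces:
    print(face, round(rolls[face] / 60_000, 3))   # the 6 stands out near 0.375
```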