Odd intuitions

Via Bruce Schneier, a good description of a common fallacy:

Imagine you've invented a machine to detect terrorists. It's good, about 90% accurate.

…you receive urgent information … that a potential attacker is in the building. Security teams seal every exit and all 3,000 people inside are rounded up to be tested.

The first 30 pass. Then, dramatically, a man … fails. Police pounce, guns point.

How sure are you that this person is a terrorist?

A. 90%
B. 10%
C. 0.3%

  1. PLW says

    Depends on your prior… are we assuming 1/3000 is a terrorist to begin with, or might there be a cell?

  2. says

    C is the best answer (* but not exactly right! You've already seen 30 people test negative. I'm a mathematician)

    There's a 10% chance of a false positive for each person. There are 3,000 people, so you'd expect about 300 false positives. Let's assume there's one terrorist. Then only about 1 in 300 positives will be the terrorist.

    This same "paradox" is why mandatory HIV testing is a rotten idea, given the false positive rate. You'll have more false positives than real positives.
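    That arithmetic is easy to check directly. A minimal sketch, assuming the figures stated in the thread (3,000 people, one terrorist, and "90% accurate" meaning a 90% hit rate and a 10% false-positive rate):

    ```python
    # Expected counts for 3,000 people, one terrorist, a 90%-accurate test.
    # "90% accurate" is read as a 10% false-positive rate and a 90% true-
    # positive rate, as the thread assumes.
    n_people = 3000
    n_terrorists = 1
    accuracy = 0.9

    false_positives = (n_people - n_terrorists) * (1 - accuracy)  # 2999 * 0.1
    true_positives = n_terrorists * accuracy                      # 0.9

    print(false_positives)                                 # ~300 innocents flagged
    print(true_positives / (true_positives + false_positives))  # ~0.003, answer C
    ```

    About 300 flagged innocents against less than one expected flagged terrorist, hence the roughly 0.3% posterior.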

  3. says

    Here's a question — are we also assuming that the device only gives false positives, not false negatives? It seems that you're assuming that the terrorist WILL be among the 300 positives. What if the terrorist is among the 2700 negatives?

  4. David Schwartz says

    I think answer B is correct: it's ten percent. You have no idea that your information is perfectly accurate. There could be zero terrorists in the building. Everyone in the building could be a terrorist — heck, it could be a terrorist headquarters and the first few people were all false negatives.

    Where does it say that there's exactly one terrorist in the building? Why is that a reasonable assumption?

  5. David Schwartz says

    Ahh, I see the problem. Critical information about the problem was eliminated in the excerpt. The original problem description gives you good reason to assume that there's likely to be one terrorist in the building.

    (The way I've usually heard this explained is by a person who, analyzing their risk factors, has about a one in 10,000 chance of having AIDS. A 99% accurate AIDS test comes back positive. What's their odds of actually having AIDS?)
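    Running that version through Bayes' rule gives the same surprise. A sketch under the stated assumptions (1-in-10,000 prior; "99% accurate" read as both the hit rate and one minus the false-positive rate):

    ```python
    prior = 1 / 10_000        # risk-factor-based chance of having the disease
    sensitivity = 0.99        # P(positive | disease)
    false_positive_rate = 0.01  # P(positive | no disease)

    # Total probability of a positive test, then Bayes' rule.
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    posterior = sensitivity * prior / p_positive
    print(posterior)  # roughly 0.0098: under 1%, despite the positive test
    ```

    Even a 99% accurate test leaves the odds of actually having the disease below one percent, because the prior is so small.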

  6. says

    Critical information about the problem was eliminated in the excerpt

    Hence the ellipses and of course the link; the latter was the focus of the post and the only reason to provide the teaser….

    Rum thing, this interweb.

  7. David Schwartz says

    PLW: And that's the key insight to avoiding the base rate fallacy. Given that a test gave a particular result, you need both the accuracy of the test and the base rate to assess the probability that the test was accurate.

    For a 90% accurate test, here's how it breaks down:
    If there is one terrorist in the building: .29%
    If there are ten: 2.9%
    If there are thirty: 8.3%
    You need 37 terrorists to make 10%.
    If there are 100 terrorists, it's 23%.
    If there are 300 terrorists, the odds that this guy is a terrorist are 50%.

    This assumes that the test is 90% likely to identify a terrorist as a terrorist and also 90% likely to misidentify an innocent person as a terrorist.
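    Those figures all fall out of Bayes' rule. A quick sketch (my own code, not from the thread) that reproduces the table under the same assumptions:

    ```python
    def posterior(n_terrorists, n_people=3000, accuracy=0.9):
        """P(terrorist | positive test): 90% hit rate, 10% false-positive rate."""
        true_pos = accuracy * n_terrorists
        false_pos = (1 - accuracy) * (n_people - n_terrorists)
        return true_pos / (true_pos + false_pos)

    for n in (1, 10, 30, 37, 100, 300):
        print(n, f"{posterior(n):.1%}")
    # 1 -> 0.3%, 10 -> 2.9%, 30 -> 8.3%, 37 -> 10.1%, 100 -> 23.7%, 300 -> 50.0%
    ```

    At 300 terrorists the expected true positives (270) exactly match the expected false positives (270), which is why the posterior lands on 50%.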

  8. Patrick says

    More interesting than the question posed is how the machine's reliability was tested and established at 90% in the first place.

    But that's a question for engineers. Everyone's a lay mathematician. No one's a lay engineer.

  9. says

    More interesting than the question posed is how the machine's reliability was tested and established at 90% in the first place.

    After testing a sample group, everyone was waterboarded until it could be accurately determined whether they were honest about their knowledge of terrorist networks.

  10. Mark says

    I invented a fantastic terrorist-detector, but it's hard to get terrorists to volunteer for product testing.

    My Craigslist ads go unanswered. :(