When a decision is made without complete information, there are four possible outcomes.
For example, consider the question "Does Person X pose a terrorist threat?" Absent a specific terrorist act tied to Person X, the question cannot be answered with certainty.
However, the whole point of asking the question is to try to identify terrorism threats before they happen.
Either the available information and the processes through which you arrive at an answer identify the person as a threat or they do not.
The desirable outcomes are:
Person X is identified as a terrorism threat when he really is a terrorist.
Person X is identified as not posing a terrorism threat when he really is not a terrorist.
Those are the desirable outcomes, but because the decision about whether Person X poses a threat must be made with incomplete information (before he boards a plane, before he drives into Times Square, before he is allowed into the country, etc.), it is also possible to arrive at a wrong conclusion based on the available information and the processes that turn that information into an answer.
The two possible erroneous outcomes are:
The person is identified as posing a terrorism threat when he really is not a terrorist (Type I error, commonly known as a false positive).
The person is not identified as posing a terrorism threat when he really is a terrorist (Type II error, commonly known as a false negative).
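The four outcomes above can be labeled mechanically. Here is a minimal sketch in Python; the function name and labels are mine, purely for illustration:

```python
# Hypothetical helper: label the four possible outcomes of a
# threat-identification decision, given the decision and the truth.
def outcome(identified_as_threat: bool, actually_a_threat: bool) -> str:
    if identified_as_threat and actually_a_threat:
        return "true positive (correct)"
    if not identified_as_threat and not actually_a_threat:
        return "true negative (correct)"
    if identified_as_threat and not actually_a_threat:
        return "Type I error (false positive)"
    return "Type II error (false negative)"

print(outcome(True, False))   # Type I error (false positive)
print(outcome(False, True))   # Type II error (false negative)
```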
With given information and given processes, there is a trade-off between the probabilities of committing a Type I error and a Type II error.
Clearly, one way to make sure you never mistakenly identify someone as a terrorist is to assume that no one poses a terrorist threat. Of course, given that terrorists and terrorism exist, this is likely to lead to a huge number of threats going undetected until it is too late.
Conversely, one way to make sure you never fail to identify a terrorism threat is to assume that practically everyone is a terrorist. This approach is undesirable at many levels, but it has been practiced many times throughout history: the Salem witch trials and the Japanese-American internments during World War II, for example.
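The trade-off between the two extremes can be made concrete with a toy simulation. Assume (purely for illustration; the score distributions are invented) that a screening process assigns each person a "suspicion score," that actual threats tend to score higher, and that everyone above a threshold is flagged. Sliding the threshold trades one error type for the other:

```python
import random

random.seed(0)

# Assumed, invented score distributions for illustration only:
# actual threats tend to score higher than non-threats, with overlap.
threats     = [random.gauss(2.0, 1.0) for _ in range(100)]
non_threats = [random.gauss(0.0, 1.0) for _ in range(10_000)]

for threshold in (-5.0, 0.0, 1.0, 2.0, 10.0):
    type_1 = sum(s > threshold for s in non_threats)   # innocents flagged
    type_2 = sum(s <= threshold for s in threats)      # threats missed
    print(f"threshold {threshold:5.1f}: Type I = {type_1:5d}, Type II = {type_2:3d}")
```

A very low threshold flags nearly everyone (many Type I errors, few Type II), and a very high threshold flags nearly no one (the reverse); no threshold eliminates both.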
Therefore, one must accept at the outset that both types of errors will occur from time to time.
For example, in the case of the six imams, it looks like security personnel committed a Type I error: those imams were mistakenly identified as posing a terrorism threat when they in fact did not.
Conversely, in the case of Nidal Malik Hasan, the authorities committed Type II errors (i.e., they failed multiple times to identify a threat before the actual attack).
The fact that both types of errors naturally arise whenever decisions have to be made with incomplete information does not mean that we should not worry about them.
However, we have to think very hard about how to allocate scarce resources: Which type of error should we try hardest to avoid?
For example, if you assume that every person who has ever traveled to Pakistan is a threat, you will waste a lot of time and resources chasing down non-threats. At least some of those scarce resources could be put to better use elsewhere.
And, of course, if you assume every such person is just an innocent tourist, you are going to miss a lot of threats.
Therefore, the optimal balance of vigilance and permissiveness depends on the consequences of these decisions and the associated errors.
It seems to me that the current balance in the United States is tilted too far toward avoiding Type I errors. This is somewhat unavoidable: given that those with terrorist tendencies form a very small proportion of all people who have ever traveled to Pakistan, and an even smaller proportion of all Muslims, it will very often be the right decision not to identify someone as a terrorist threat, and there will be far more opportunities to make Type I errors.
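The asymmetry in opportunities follows from simple base-rate arithmetic. With entirely hypothetical numbers (assumed here purely for illustration, not actual figures), even a screening process that is right 99% of the time in each direction produces vastly more Type I than Type II errors when actual threats are rare:

```python
# All numbers below are assumptions for illustration, not real data.
population   = 10_000_000   # people screened (assumed)
true_threats = 100          # actual threats among them (assumed)
innocents    = population - true_threats

sensitivity = 0.99   # assumed P(flagged | actual threat)
specificity = 0.99   # assumed P(cleared | not a threat)

false_positives = innocents * (1 - specificity)      # Type I errors
false_negatives = true_threats * (1 - sensitivity)   # Type II errors
flagged = false_positives + true_threats * sensitivity

print(f"Type I errors (innocents flagged): {false_positives:,.0f}")
print(f"Type II errors (threats missed):   {false_negatives:,.0f}")
print(f"Share of flagged who are actual threats: {true_threats * sensitivity / flagged:.4%}")
```

Under these assumed numbers, roughly a hundred thousand innocents are flagged for every threat missed, and only a tiny fraction of those flagged are actual threats, which is why "not a threat" is so often the right call.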
On the other hand, it is not 1942 any more: there are no Muslim internment camps in the U.S. The real cost of mistakenly being identified as a threat is little more than an inconvenience. However, given the hysteria generated, say, in the six-imam case, the cost to an individual officer or airline employee can be far greater than justifiable ("See something, say something" is all well and good, but what happens when saying something makes you the target of a lawsuit?). This breeds a subconscious, and sometimes overt, tendency to avoid identifying people as threats even when they are broadcasting their intentions (e.g., Nidal Malik Hasan).
The cost of failing to identify a threat is huge by comparison. And this cost affects everyone, from people who travel to Pakistan to visit family and friends or buy rugs, to people who could not find Pakistan on a map.
It seems to me the most straightforward way to avoid widespread anti-Muslim sentiment is to prevent terrorist attacks by Muslim extremists. After all, there probably would not have been Japanese internment camps without Pearl Harbor (this is not an excuse, but the attack was a relevant factor in how those camps came about).
And, so far, the most serious errors seem to have been Type II errors. While I am very glad that the authorities are careful to avoid wholesale discrimination against large groups of people, I am also worried enough about my own safety, and the safety of those I love, that I think it is time to take a hard look at why.
Update: Consider the AP headline "NY car bomb suspect cooperates, but motive mystery" and ask yourself what that says about the prevailing mindset (HT Roger Simon).