Some people say that they dislike risk. The fact is, risks are unavoidable: the only way to avoid making mistakes is to do nothing, and doing nothing risks missing every opportunity. Instead of asking "how much risk should I take?", one should make informed decisions about which risks to take.
Being risk-averse is easy: one can reject every proposal that bears the slightest risk. Unfortunately, nothing in life is risk-free, and being 100% risk-averse means doing nothing, which is itself a mistake of a different type. By rejecting a proposal, one risks missing an opportunity; by concluding a diagnosis in the negative, one risks failing to spot a real problem. The next example illustrates this.
If today you ask me "will there be an earthquake in Japan tomorrow?", I will say "no". If you ask me again tomorrow, I will say "no" again. In fact, I will answer "no" every day. Now these "predictions", if you can call them predictions, tend to be very "accurate": they are correct all the time, except on the days when an earthquake does happen in Japan. Yet they are completely useless, because they fail to predict any of the earthquakes! In other words, being "accurate" most of the time is irrelevant, unless my predictions manage to spot the events that matter at least some of the time.
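A quick back-of-the-envelope computation makes the point. The sketch below uses assumed counts (earthquakes on 2 days out of 365; the numbers are illustrative, not real seismic data):

    days, quake_days = 365, 2               # assumed counts, for illustration only
    accuracy = (days - quake_days) / days   # always "no" is right on every calm day
    print(accuracy)                         # ~0.995: very "accurate"...
    quakes_predicted = 0                    # ...yet it predicts none of the earthquakes
    print(quakes_predicted / quake_days)    # 0.0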
In the earthquake example, predictions should be judged by how many earthquakes they manage to predict correctly. If I predict "an earthquake will happen tomorrow", but in fact there is no earthquake tomorrow, then I have made a "Type I error", or concluded a "false positive". In statistical terms, this is the error of accepting a hypothesis (here, "there will be an earthquake tomorrow") that is actually false.
Note: one must not confuse the technical term "positive" with "desirable outcome" in the everyday sense. An earthquake is undesirable, and therefore not "positive" in the everyday sense. In the statistical test, "positive" means that the proposition "there will be an earthquake in Japan tomorrow" is true.
Conversely, if I predict "no earthquake will happen tomorrow", but an earthquake does happen, then I have made a "Type II error", or concluded a "false negative". Likewise, by rejecting an investment that turns out to be profitable, one makes a Type II error. In statistical terms, this is the error of rejecting a hypothesis that is actually true.
In statistical terms, the "precision" of one's predictions is the percentage of one's "positive" predictions that turn out to be correct. In the earthquake example, precision measures how often there is indeed an earthquake when one predicts one; it captures how many Type I errors one makes. The "recall" of one's predictions, on the other hand, is the percentage of actual earthquakes that one manages to predict; it captures how many Type II errors one makes.
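In code, the two measures are simple ratios over the counts of true positives (TP), false positives (FP, the Type I errors) and false negatives (FN, the Type II errors). A minimal sketch in Python (the helper names are mine, not from any standard library):

    def precision(tp, fp):
        # Of all the times I predicted "positive", how often was I right?
        # Every false positive (Type I error) drags precision down.
        return tp / (tp + fp)

    def recall(tp, fn):
        # Of all the positives in reality, how many did I predict?
        # Every false negative (Type II error) drags recall down.
        return tp / (tp + fn)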
Unless one has a crystal ball, one is bound to make errors of one type or the other. In the earthquake example, the only way to avoid all Type I errors is to predict "no" all the time. But earthquakes do happen in Japan. Therefore, the cost of making no Type I errors is making every possible Type II error.
On the other hand, if one predicts an earthquake every day, then one will certainly "predict" every earthquake that happens. But that means making every possible Type I error (i.e. predicting many earthquakes that never happen), though no Type II error at all (i.e. not missing a single disaster).
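The trade-off can be made concrete with the helpers above, reusing the assumed counts from the earlier sketch (2 earthquake days, 363 calm days):

    quake_days, calm_days = 2, 363                 # assumed counts, illustration only
    # Always predict "no": fp = 0 (no Type I errors), fn = 2 (every Type II error).
    print(recall(tp=0, fn=quake_days))             # 0.0: misses every earthquake
    # Always predict "yes": fn = 0 (no Type II errors), fp = 363 (every Type I error).
    print(recall(tp=quake_days, fn=0))             # 1.0: "predicts" every earthquake
    print(precision(tp=quake_days, fp=calm_days))  # ~0.005: almost all false alarms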
This example shows that, unless one has a crystal ball, risk is inevitable: one has to balance Type I errors against Type II errors. The term "risk-averse" only makes sense if one pays attention to one type of error alone. In investment terms, being risk-averse typically means avoiding Type I errors. This can be achieved by making only safe investments, i.e. rejecting all investment proposals except the ones that involve low risk.
The table below summarises the four possible outcomes, with example counts:

|                   | Prediction: Negative                                        | Prediction: Positive                                        |
| Reality: Negative | True Negative (TN): correct prediction. E.g. 300 instances. | Type I Error / False Positive (FP). E.g. 200 instances.     |
| Reality: Positive | Type II Error / False Negative (FN). E.g. 100 instances.    | True Positive (TP): correct prediction. E.g. 400 instances. |

True Negative: I predicted that there would be no earthquake, and no earthquake happened; my prediction was correct.
Type I Error: I predicted that there would be an earthquake, but no earthquake happened. (Investment analogy: I thought a proposal was an opportunity, but it turned out to be unprofitable.)
Type II Error: I predicted that there would be no earthquake, but an earthquake did happen. (Investment analogy: I rejected every investment proposal, so I missed every opportunity.)
True Positive: I predicted that there would be an earthquake, and an earthquake did happen; my prediction was correct.

Remarks on reality: we assume no control over whether reality is negative or positive; these counts are taken as constants. E.g. 300 + 200 = 500 negative instances in reality.
Remarks on predictions: I cannot change the reality; I can only control my predictions, which are either Negative or Positive in this context.

Recall: the percentage of correct positive predictions (TP) out of all positive situations in reality, i.e. TP ÷ (FN + TP). E.g. Recall = 400 ÷ (100 + 400) = 80%.
Precision: the percentage of correct positive predictions (TP) out of all positive predictions made, i.e. TP ÷ (FP + TP). E.g. Precision = 400 ÷ (200 + 400) ≈ 66.7%.
Accuracy: the overall percentage of correct predictions, i.e. (TN + TP) ÷ all instances. E.g. Accuracy = (300 + 400) ÷ (300 + 200 + 100 + 400) = 70%.
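The worked figures in the table can be reproduced with the helpers defined earlier (a sketch using the example counts above):

    tn, fp, fn, tp = 300, 200, 100, 400     # example counts from the table
    print(recall(tp, fn))                   # 400 / (100 + 400) = 0.8
    print(precision(tp, fp))                # 400 / (200 + 400) ≈ 0.667
    print((tn + tp) / (tn + fp + fn + tp))  # accuracy: 700 / 1000 = 0.7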
[End]
All Rights Reserved