The metrics (precision, recall, F1) computed from a confusion matrix are defined for binary classification problems.

| | predicted 0 | predicted 1 |
| --- | --- | --- |
| **truly 0** | TP | FN |
| **truly 1** | FP | TN |

`precision = TP / (TP+FP)`

`recall = TP / (TP+FN)`
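These formulas translate directly into code; here is a minimal sketch with made-up counts for illustration:

```python
def precision(tp, fp):
    # fraction of positive predictions that were actually positive
    return tp / (tp + fp)

def recall(tp, fn):
    # fraction of actual positives that were predicted positive
    return tp / (tp + fn)

# hypothetical counts read off a binary confusion matrix
tp, fp, fn = 8, 2, 4
print(precision(tp, fp))  # 0.8
print(recall(tp, fn))     # 0.666...
```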

For a multiclass case (say, 3 classes), the confusion matrix looks like this:

| | predicted 0 | predicted 1 | predicted 2 |
| --- | --- | --- | --- |
| **truly 0** | . | . | . |
| **truly 1** | . | . | . |
| **truly 2** | . | . | . |

You can see here that the idea of, say, a *true negative* is no longer obvious. That's because *true negative* inherently assumes binary classification.
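A multiclass confusion matrix like the 3×3 one above can be tallied from (true, predicted) label pairs with a short loop; the labels below are hypothetical:

```python
def confusion_matrix(y_true, y_pred, n_classes):
    # rows = true class, columns = predicted class
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0, 2]
for row in confusion_matrix(y_true, y_pred, 3):
    print(row)
# [1, 1, 0]
# [0, 2, 0]
# [1, 0, 2]
```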

You can break this multiclass classifier into 3 binary classifiers using the 'one-vs-rest' method: each binary classifier predicts a single class against all the others.

| | predicted 0 | predicted not 0 |
| --- | --- | --- |
| **truly 0** | TP for 0 | FN for 0 |
| **truly not 0** | FP for 0 | TN for 0 |

Using this confusion matrix, precision and recall can be calculated for the 0th class only. Similarly, you can build two more confusion matrices for classes 1 and 2 and compute their metrics separately.
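The one-vs-rest reduction above can be sketched directly from a multiclass confusion matrix, using the same row = truly, column = predicted convention (the 3×3 matrix below is hypothetical):

```python
def one_vs_rest_counts(matrix, k):
    # collapse a multiclass confusion matrix into binary counts for class k
    n = len(matrix)
    tp = matrix[k][k]
    fp = sum(matrix[i][k] for i in range(n) if i != k)  # predicted k, truly not k
    fn = sum(matrix[k][j] for j in range(n) if j != k)  # truly k, predicted not k
    tn = sum(matrix[i][j] for i in range(n)
             for j in range(n) if i != k and j != k)
    return tp, fp, fn, tn

m = [[5, 1, 0],
     [2, 6, 1],
     [0, 1, 4]]  # made-up 3-class confusion matrix

for k in range(3):
    tp, fp, fn, tn = one_vs_rest_counts(m, k)
    print(f"class {k}: precision={tp / (tp + fp):.2f} recall={tp / (tp + fn):.2f}")
```

The three per-class scores can then be averaged ('macro' averaging) if a single summary number is needed.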