Resolved: Confusion matrix
I've seen many versions of the confusion matrix with mixed labels; for example, FN and FP are switched even though the rest of the arrangement is the same as yours. Wouldn't it make more sense, if the model predicted a wrong N, to label it as FN instead of what you label as FP? Is there a standard for how these labels are assigned, or can we assign them however we like and just change the interpretation?
1 answer (1 marked as helpful)
Hello Doaa,
Thanks for reaching out!
Great question!
The confusion matrix can sometimes appear with different label placements depending on the convention used, which can indeed be confusing. However, there is a standard way to assign these labels based on the definitions of True Positives (TP), False Positives (FP), False Negatives (FN), and True Negatives (TN).
In the standard format:
True Positive (TP): The model correctly predicts a positive case.
False Positive (FP): The model incorrectly predicts a positive case when it should be negative (also called a Type I error).
False Negative (FN): The model incorrectly predicts a negative case when it should be positive (also called a Type II error).
True Negative (TN): The model correctly predicts a negative case.
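To make these definitions concrete, here is a minimal sketch in Python (the labels are toy values invented for illustration), counting each outcome directly from the definitions above:

```python
# Toy data, invented for illustration: 1 = positive class, 0 = negative class.
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 1, 0, 1, 0]

pairs = list(zip(actual, predicted))
tp = sum(a == 1 and p == 1 for a, p in pairs)  # correctly predicted positive
fp = sum(a == 0 and p == 1 for a, p in pairs)  # predicted positive, actually negative (Type I error)
fn = sum(a == 1 and p == 0 for a, p in pairs)  # predicted negative, actually positive (Type II error)
tn = sum(a == 0 and p == 0 for a, p in pairs)  # correctly predicted negative

print(tp, fp, fn, tn)  # 3 1 1 3
```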
The confusion usually arises because some sources switch the row and column orientation of the matrix. In the most commonly used layout:
Rows: Represent the actual values (e.g., Actual Positive/Negative).
Columns: Represent the predicted values (e.g., Predicted Positive/Negative).
If the rows and columns are swapped, the placement of FN and FP will also switch, leading to the differences you've observed. The key takeaway is that as long as the interpretation remains consistent with the definitions of FP and FN, the orientation can be adapted—just be clear on how it's being presented.
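As a quick illustration of that layout, scikit-learn's confusion_matrix uses rows for actual values and columns for predicted values, with classes sorted so the negative class (0) comes first. Using the same toy labels as above:

```python
from sklearn.metrics import confusion_matrix

actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 1, 0, 1, 0]

# Rows = actual, columns = predicted, classes ordered [0, 1], so the layout is:
# [[TN, FP],
#  [FN, TP]]
cm = confusion_matrix(actual, predicted)
print(cm)
# [[3 1]
#  [1 3]]

tn, fp, fn, tp = cm.ravel()  # unpack in scikit-learn's row-major order
print(tn, fp, fn, tp)  # 3 1 1 3
```

Other tools or textbooks may put predicted values on the rows instead, which swaps the off-diagonal FP and FN cells: exactly the mismatch described above.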
Hope this helps clarify things!
Best,
The 365 Team