How do I write a confusion matrix in Python?
machine-learning
python

I wrote some Python code that computes a confusion matrix:

def conf_mat(prob_arr, input_arr):
    # confusion matrix
    conf_arr = [[0, 0], [0, 0]]

    for i in range(len(prob_arr)):
        if int(input_arr[i]) == 1:
            if float(prob_arr[i]) < 0.5:
                conf_arr[0][1] = conf_arr[0][1] + 1
            else:
                conf_arr[0][0] = conf_arr[0][0] + 1
        elif int(input_arr[i]) == 2:
            if float(prob_arr[i]) >= 0.5:
                conf_arr[1][0] = conf_arr[1][0] + 1
            else:
                conf_arr[1][1] = conf_arr[1][1] + 1

    accuracy = float(conf_arr[0][0] + conf_arr[1][1]) / len(input_arr)

    return conf_arr, accuracy

prob_arr is an array returned by my classification code, and a sample array looks like this:

 [1.0, 1.0, 1.0, 0.41592955657342651, 1.0, 0.0053405015805891975, 4.5321494433440449e-299, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.70943426182688163, 1.0, 1.0, 1.0, 1.0]

input_arr contains the original class labels of the dataset, like this:

[2, 1, 1, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 1]

What my code tries to do: given prob_arr and input_arr, for each class (1 and 2) I check whether it was misclassified.

But my code only works for two classes. If I run it on data with more than two classes, it doesn't work. How can I do this for multiple classes?

For example, for a dataset with three classes, it should return: [[21,7,3],[3,38,6],[5,4,19]]

Source: Stack Overflow
9 Answers

Here is a confusion matrix class that supports pretty-printing and more:

http://nltk.googlecode.com/svn/trunk/doc/api/nltk.metrics.confusionmatrix-pysrc.html
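
For reference, a minimal sketch of how that NLTK class is typically used (assuming a recent NLTK release; the gold and predicted labels below are made up):

from nltk.metrics import ConfusionMatrix

# hypothetical gold labels and predictions
gold = ['cat', 'dog', 'dog', 'cat', 'bird']
pred = ['cat', 'dog', 'cat', 'cat', 'bird']

cm = ConfusionMatrix(gold, pred)
print(cm)                                    # plain-text confusion matrix
print(cm.pretty_format(sort_by_count=True))  # pretty-printed, sorted by frequency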


Scikit-learn provides a confusion_matrix function:

from sklearn.metrics import confusion_matrix
y_actu = [2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2]
y_pred = [0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2]
confusion_matrix(y_actu, y_pred)

which outputs a NumPy array:

array([[3, 0, 0],
       [0, 1, 2],
       [2, 1, 3]])

But you can also create a confusion matrix with Pandas:

import pandas as pd
y_actu = pd.Series([2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2], name='Actual')
y_pred = pd.Series([0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2], name='Predicted')
df_confusion = pd.crosstab(y_actu, y_pred)

You will get a (nicely labeled) Pandas DataFrame:

Predicted  0  1  2
Actual
0          3  0  0
1          0  1  2
2          2  1  3

If you add margins=True,

df_confusion = pd.crosstab(y_actu, y_pred, rownames=['Actual'], colnames=['Predicted'], margins=True)

you will also get the total for each row and column:

Predicted  0  1  2  All
Actual
0          3  0  0    3
1          0  1  2    3
2          2  1  3    6
All        5  2  5   12

You can also get a normalized confusion matrix (each row divided by its row total) with:

df_conf_norm = df_confusion.div(df_confusion.sum(axis=1), axis=0)

Predicted         0         1         2
Actual
0          1.000000  0.000000  0.000000
1          0.000000  0.333333  0.666667
2          0.333333  0.166667  0.500000

You can plot this confusion matrix with matplotlib:

import matplotlib.pyplot as plt
import numpy as np
def plot_confusion_matrix(df_confusion, title='Confusion matrix', cmap=plt.cm.gray_r):
    plt.matshow(df_confusion, cmap=cmap) # imshow
    #plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(df_confusion.columns))
    plt.xticks(tick_marks, df_confusion.columns, rotation=45)
    plt.yticks(tick_marks, df_confusion.index)
    #plt.tight_layout()
    plt.ylabel(df_confusion.index.name)
    plt.xlabel(df_confusion.columns.name)

plot_confusion_matrix(df_confusion)

[Plot of the confusion matrix]

Or plot the normalized confusion matrix with:

plot_confusion_matrix(df_conf_norm)  

[Plot of the normalized confusion matrix]

You may also be interested in the project https://github.com/pandas-ml/pandas-ml and its pip package https://pypi.python.org/pypi/pandas_ml

With this package, the confusion matrix can be pretty-printed and plotted. You can binarize the confusion matrix and get class statistics such as TP, TN, FP, FN, ACC, TPR, FPR, FNR, TNR (SPC), LR+, LR-, DOR, PPV, FDR, FOR, NPV and more:

In [1]: from pandas_ml import ConfusionMatrix
In [2]: y_actu = [2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2]
In [3]: y_pred = [0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2]
In [4]: cm = ConfusionMatrix(y_actu, y_pred)
In [5]: cm.print_stats()
Confusion Matrix:

Predicted  0  1  2  __all__
Actual
0          3  0  0        3
1          0  1  2        3
2          2  1  3        6
__all__    5  2  5       12


Overall Statistics:

Accuracy: 0.583333333333
95% CI: (0.27666968568210581, 0.84834777019156982)
No Information Rate: ToDo
P-Value [Acc > NIR]: 0.189264302376
Kappa: 0.354838709677
Mcnemar's Test P-Value: ToDo


Class Statistics:

Classes                                        0          1          2
Population                                    12         12         12
P: Condition positive                          3          3          6
N: Condition negative                          9          9          6
Test outcome positive                          5          2          5
Test outcome negative                          7         10          7
TP: True Positive                              3          1          3
TN: True Negative                              7          8          4
FP: False Positive                             2          1          2
FN: False Negative                             0          2          3
TPR: (Sensitivity, hit rate, recall)           1  0.3333333        0.5
TNR=SPC: (Specificity)                 0.7777778  0.8888889  0.6666667
PPV: Pos Pred Value (Precision)              0.6        0.5        0.6
NPV: Neg Pred Value                            1        0.8  0.5714286
FPR: False-out                         0.2222222  0.1111111  0.3333333
FDR: False Discovery Rate                    0.4        0.5        0.4
FNR: Miss Rate                                 0  0.6666667        0.5
ACC: Accuracy                          0.8333333       0.75  0.5833333
F1 score                                    0.75        0.4  0.5454545
MCC: Matthews correlation coefficient  0.6831301  0.2581989  0.1690309
Informedness                           0.7777778  0.2222222  0.1666667
Markedness                                   0.6        0.3  0.1714286
Prevalence                                  0.25       0.25        0.5
LR+: Positive likelihood ratio               4.5          3        1.5
LR-: Negative likelihood ratio                 0       0.75       0.75
DOR: Diagnostic odds ratio                   inf          4          2
FOR: False omission rate                       0        0.2  0.4285714

I also noticed that a new Python library for confusion matrices called PyCM has been released; you may want to take a look at it.
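
If you want to try PyCM, a minimal sketch (assuming the current pycm API with its actual_vector / predict_vector keyword arguments):

from pycm import ConfusionMatrix

y_actu = [2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2]
y_pred = [0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2]

cm = ConfusionMatrix(actual_vector=y_actu, predict_vector=y_pred)
print(cm)  # prints the matrix together with overall and per-class statistics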


Scikit-learn (which I suggest using anyway) has this included in its metrics module:

>>> from sklearn.metrics import confusion_matrix
>>> y_true = [0, 1, 2, 0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 0, 0, 0, 1, 1, 0, 2, 2]
>>> confusion_matrix(y_true, y_pred)
array([[3, 0, 0],
       [1, 1, 1],
       [1, 1, 1]])
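
If you need a fixed class ordering, or want classes that never appear in y_pred to still get a row and column, confusion_matrix also accepts a labels argument; for example, reversing the order:

>>> confusion_matrix(y_true, y_pred, labels=[2, 1, 0])
array([[1, 1, 1],
       [1, 1, 1],
       [0, 0, 3]])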

Update

Since writing this, I have updated my library implementation to include a few other nice features. As with the code below, there are no third-party dependencies. The class can also print out a nice tabulation, similar to many commonly used statistical packages. See this gist.

Example usage of the gist above:

# Example Usage
actual      = ["A", "B", "C", "C", "B", "C", "C", "B", "A", "A", "B", "A", "B", "C", "A", "B", "C"]
predicted   = ["A", "B", "B", "C", "A", "C", "A", "B", "C", "A", "B", "B", "B", "C", "A", "A", "C"]

# Initialize Performance Class
performance = Performance(actual, predicted)

# Print Confusion Matrix
performance.tabulate()

Here is an example of the output:

===================================
        Aᴬ      Bᴬ      Cᴬ

Aᴾ      3       2       1

Bᴾ      1       4       1

Cᴾ      1       0       4

Note: classᴾ = Predicted, classᴬ = Actual
===================================

In addition to the raw counts, we can also output a normalized confusion matrix (i.e. proportions):

# Print Normalized Confusion Matrix
performance.tabulate(normalized = True)

===================================
        Aᴬ      Bᴬ      Cᴬ

Aᴾ      17.65%  11.76%  5.88%

Bᴾ      5.88%   23.53%  5.88%

Cᴾ      5.88%   0.00%   23.53%

Note: classᴾ = Predicted, classᴬ = Actual
===================================

A simple multiclass implementation

A multiclass confusion matrix can be computed very simply in vanilla Python in roughly O(N) time. All we do is map the unique classes found in the actual vector to the indices of a two-dimensional list; from there, we simply iterate over the zipped actual and predicted vectors and fill in the counts.

# A Simple Confusion Matrix Implementation
def confusionmatrix(actual, predicted, normalize = False):
    """
    Generate a confusion matrix for multiple classification
    @params:
        actual      - a list of integers or strings for known classes
        predicted   - a list of integers or strings for predicted classes
        normalize   - optional boolean for matrix normalization
    @return:
        matrix      - a 2-dimensional list of pairwise counts
    """
    unique = sorted(set(actual))
    matrix = [[0 for _ in unique] for _ in unique]
    imap   = {key: i for i, key in enumerate(unique)}
    # Generate Confusion Matrix
    for p, a in zip(predicted, actual):
        matrix[imap[p]][imap[a]] += 1
    # Matrix Normalization
    if normalize:
        sigma = sum([sum(matrix[imap[i]]) for i in unique])
        matrix = [row for row in map(lambda i: list(map(lambda j: j / sigma, i)), matrix)]
    return matrix

Usage

# Input Below Should Return: [[2, 1, 0], [0, 2, 1], [1, 2, 1]]
cm = confusionmatrix(
    [1, 1, 2, 0, 1, 1, 2, 0, 0, 1], # actual
    [0, 1, 1, 0, 2, 1, 2, 2, 0, 2]  # predicted
)

# And The Output
print(cm)
[[2, 1, 0], [0, 2, 1], [1, 2, 1]]

Note: the actual classes run along the columns and the predicted classes run along the rows, as laid out below.

# Actual
# 0  1  2
  #  #  #   
[[2, 1, 0], # 0
 [0, 2, 1], # 1  Predicted
 [1, 2, 1]] # 2
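
If you prefer the more common orientation (actual along the rows and predicted along the columns, as scikit-learn uses), you can simply transpose the nested list, for example:

# Transpose the predicted-by-actual matrix from above into actual-by-predicted
cm_transposed = [list(row) for row in zip(*cm)]
print(cm_transposed)  # [[2, 0, 1], [1, 2, 2], [0, 1, 1]]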

The class names can be strings or integers:

# Input Below Should Return: [[2, 1, 0], [0, 2, 1], [1, 2, 1]]
cm = confusionmatrix(
    ["B", "B", "C", "A", "B", "B", "C", "A", "A", "B"], # actual
    ["A", "B", "B", "A", "C", "B", "C", "C", "A", "C"]  # predicted
)

# And The Output
print(cm)
[[2, 1, 0], [0, 2, 1], [1, 2, 1]]

You can also return the matrix as proportions (i.e. normalized):

# Input Below Should Return: [[0.2, 0.1, 0.0], [0.0, 0.2, 0.1], [0.1, 0.2, 0.1]]
cm = confusionmatrix(
    ["B", "B", "C", "A", "B", "B", "C", "A", "A", "B"], # actual
    ["A", "B", "B", "A", "C", "B", "C", "C", "A", "C"], # predicted
    normalize = True
)

# And The Output
print(cm)
[[0.2, 0.1, 0.0], [0.0, 0.2, 0.1], [0.1, 0.2, 0.1]]

Extracting statistics from a multiclass confusion matrix

Once you have the matrix, you can compute a bunch of statistics to evaluate your classifier. That said, extracting the values out of a multiclass confusion matrix can be a bit of a headache. Here is a function that returns both the confusion matrix and per-class statistics:

# Not Required, But Nice For Legibility
from collections import OrderedDict

# A Simple Confusion Matrix Implementation
def confusionmatrix(actual, predicted, normalize = False):
    """
    Generate a confusion matrix for multiple classification
    @params:
        actual      - a list of integers or strings for known classes
        predicted   - a list of integers or strings for predicted classes
    @return:
        matrix      - a 2-dimensional list of pairwise counts
        statistics  - a dictionary of statistics for each class
    """
    unique = sorted(set(actual))
    matrix = [[0 for _ in unique] for _ in unique]
    imap   = {key: i for i, key in enumerate(unique)}
    # Generate Confusion Matrix
    for p, a in zip(predicted, actual):
        matrix[imap[p]][imap[a]] += 1
    # Get Confusion Matrix Sum
    sigma = sum([sum(matrix[imap[i]]) for i in unique])
    # Scaffold Statistics Data Structure
    statistics = OrderedDict(((i, {"counts" : OrderedDict(), "stats" : OrderedDict()}) for i in unique))
    # Iterate Through Classes & Compute Statistics
    for i in unique:
        loc = matrix[imap[i]][imap[i]]
        row = sum(matrix[imap[i]][:])
        col = sum([row[imap[i]] for row in matrix])
        # Get TP/TN/FP/FN
        tp  = loc
        fp  = row - loc
        fn  = col - loc
        tn  = sigma - row - col + loc
        # Populate Counts Dictionary
        statistics[i]["counts"]["tp"]   = tp
        statistics[i]["counts"]["fp"]   = fp
        statistics[i]["counts"]["tn"]   = tn
        statistics[i]["counts"]["fn"]   = fn
        statistics[i]["counts"]["pos"]  = tp + fn
        statistics[i]["counts"]["neg"]  = tn + fp
        statistics[i]["counts"]["n"]    = tp + tn + fp + fn
        # Populate Statistics Dictionary
        statistics[i]["stats"]["sensitivity"]   = tp / (tp + fn) if tp > 0 else 0.0
        statistics[i]["stats"]["specificity"]   = tn / (tn + fp) if tn > 0 else 0.0
        statistics[i]["stats"]["precision"]     = tp / (tp + fp) if tp > 0 else 0.0
        statistics[i]["stats"]["recall"]        = tp / (tp + fn) if tp > 0 else 0.0
        statistics[i]["stats"]["tpr"]           = tp / (tp + fn) if tp > 0 else 0.0
        statistics[i]["stats"]["tnr"]           = tn / (tn + fp) if tn > 0 else 0.0
        statistics[i]["stats"]["fpr"]           = fp / (fp + tn) if fp > 0 else 0.0
        statistics[i]["stats"]["fnr"]           = fn / (fn + tp) if fn > 0 else 0.0
        statistics[i]["stats"]["accuracy"]      = (tp + tn) / (tp + tn + fp + fn) if (tp + tn) > 0 else 0.0
        statistics[i]["stats"]["f1score"]       = (2 * tp) / ((2 * tp) + (fp + fn)) if tp > 0 else 0.0
        statistics[i]["stats"]["fdr"]           = fp / (fp + tp) if fp > 0 else 0.0
        statistics[i]["stats"]["for"]           = fn / (fn + tn) if fn > 0 else 0.0
        statistics[i]["stats"]["ppv"]           = tp / (tp + fp) if tp > 0 else 0.0
        statistics[i]["stats"]["npv"]           = tn / (tn + fn) if tn > 0 else 0.0
    # Matrix Normalization
    if normalize:
        matrix = [row for row in map(lambda i: list(map(lambda j: j / sigma, i)), matrix)]
    return matrix, statistics

Computing the statistics

Above, the confusion matrix is used to tabulate the statistics for each class, which are returned in an OrderedDict with the following structure:

OrderedDict(
    [
        ('A', {
            'stats' : OrderedDict([
                ('sensitivity', 0.6666666666666666), 
                ('specificity', 0.8571428571428571), 
                ('precision', 0.6666666666666666), 
                ('recall', 0.6666666666666666), 
                ('tpr', 0.6666666666666666), 
                ('tnr', 0.8571428571428571), 
                ('fpr', 0.14285714285714285), 
                ('fnr', 0.3333333333333333), 
                ('accuracy', 0.8), 
                ('f1score', 0.6666666666666666), 
                ('fdr', 0.3333333333333333), 
                ('for', 0.14285714285714285), 
                ('ppv', 0.6666666666666666), 
                ('npv', 0.8571428571428571)
            ]), 
            'counts': OrderedDict([
                ('tp', 2), 
                ('fp', 1), 
                ('tn', 6), 
                ('fn', 1), 
                ('pos', 3), 
                ('neg', 7), 
                ('n', 10)
            ])
        }), 
        ('B', {
            'stats': OrderedDict([
                ('sensitivity', 0.4), 
                ('specificity', 0.8), 
                ('precision', 0.6666666666666666), 
                ('recall', 0.4), 
                ('tpr', 0.4), 
                ('tnr', 0.8), 
                ('fpr', 0.2), 
                ('fnr', 0.6), 
                ('accuracy', 0.6), 
                ('f1score', 0.5), 
                ('fdr', 0.3333333333333333), 
                ('for', 0.42857142857142855), 
                ('ppv', 0.6666666666666666), 
                ('npv', 0.5714285714285714)
            ]), 
            'counts': OrderedDict([
                ('tp', 2), 
                ('fp', 1), 
                ('tn', 4), 
                ('fn', 3), 
                ('pos', 5), 
                ('neg', 5), 
                ('n', 10)
            ])
        }), 
        ('C', {
            'stats': OrderedDict([
                ('sensitivity', 0.5), 
                ('specificity', 0.625), 
                ('precision', 0.25), 
                ('recall', 0.5), 
                ('tpr', 0.5), 
                ('tnr', 0.625), 
                ('fpr', 0.375), 
                ('fnr', 0.5), 
                ('accuracy', 0.6), 
                ('f1score', 0.3333333333333333), 
                ('fdr', 0.75), 
                ('for', 0.16666666666666666), 
                ('ppv', 0.25), 
                ('npv', 0.8333333333333334)
            ]), 
            'counts': OrderedDict([
                ('tp', 1), 
                ('fp', 3), 
                ('tn', 5), 
                ('fn', 1), 
                ('pos', 2), 
                ('neg', 8), 
                ('n', 10)
            ])
        })
    ]
)
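
For reference, the structure above is what you get from calling the function on the 10-element string lists used earlier, and individual values can then be read out directly:

cm, statistics = confusionmatrix(
    ["B", "B", "C", "A", "B", "B", "C", "A", "A", "B"], # actual
    ["A", "B", "B", "A", "C", "B", "C", "C", "A", "C"]  # predicted
)

print(statistics["A"]["counts"]["tp"])        # 2
print(statistics["A"]["stats"]["precision"])  # 0.6666666666666666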

It has been almost a decade, but the solutions offered to this post (without sklearn) are convoluted and unnecessarily long. Computing a confusion matrix can be done cleanly in Python in a few lines. For example:

import numpy as np

def compute_confusion_matrix(true, pred):
  '''Computes a confusion matrix using numpy for two np.arrays
  true and pred.

  Results are identical (and similar in computation time) to: 
    "from sklearn.metrics import confusion_matrix"

  However, this function avoids the dependency on sklearn.'''

  K = len(np.unique(true)) # Number of classes 
  result = np.zeros((K, K))

  for i in range(len(true)):
    result[true[i]][pred[i]] += 1

  return result
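
A quick usage sketch (made-up arrays; the labels are assumed to be the integers 0..K-1, which is what the indexing in the function expects):

true = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2])
pred = np.array([0, 0, 0, 0, 1, 1, 0, 2, 2])

print(compute_confusion_matrix(true, pred))
# [[3. 0. 0.]
#  [1. 1. 1.]
#  [1. 1. 1.]]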

This function creates confusion matrices for any number of classes.

def create_conf_matrix(expected, predicted, n_classes):
    m = [[0] * n_classes for i in range(n_classes)]
    for pred, exp in zip(predicted, expected):
        m[pred][exp] += 1
    return m

def calc_accuracy(conf_matrix):
    t = sum(sum(l) for l in conf_matrix)
    return sum(conf_matrix[i][i] for i in range(len(conf_matrix))) / t

In contrast to your function above, you have to extract the predicted classes from your classification results before calling the function, mapping them to the row/column indices the matrix uses (class 1 to index 0 and class 2 to index 1), i.e. something like

predicted = [0 if p >= .5 else 1 for p in prob_arr]
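
Tying this together with a small made-up sample in the question's prob_arr / input_arr format:

prob_arr  = [1.0, 0.42, 0.005, 1.0]                  # classifier scores
input_arr = [1, 1, 2, 2]                             # true labels (1 or 2)

expected  = [c - 1 for c in input_arr]               # -> [0, 0, 1, 1]
predicted = [0 if p >= .5 else 1 for p in prob_arr]  # -> [0, 1, 1, 0]

m = create_conf_matrix(expected, predicted, 2)
print(m)                 # [[1, 1], [1, 1]]
print(calc_accuracy(m))  # 0.5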

If you don't want scikit-learn to do the work for you...

import numpy
actual = numpy.array(actual)
predicted = numpy.array(predicted)

# calculate the confusion matrix; labels is numpy array of classification labels
cm = numpy.zeros((len(labels), len(labels)))
for a, p in zip(actual, predicted):
    cm[a][p] += 1

# also get the accuracy easily with numpy
accuracy = (actual == predicted).sum() / float(len(actual))
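
To make this runnable end to end, here is one possible setup for actual, predicted, and labels (made-up data; the classes are assumed to be the integers 0..len(labels)-1 so they can index the matrix directly):

import numpy

labels    = numpy.array([0, 1, 2])
actual    = numpy.array([0, 1, 2, 0, 1, 2, 0, 1, 2])
predicted = numpy.array([0, 0, 0, 0, 1, 1, 0, 2, 2])

cm = numpy.zeros((len(labels), len(labels)))
for a, p in zip(actual, predicted):
    cm[a][p] += 1

accuracy = (actual == predicted).sum() / float(len(actual))
print(cm)        # [[3. 0. 0.]
                 #  [1. 1. 1.]
                 #  [1. 1. 1.]]
print(accuracy)  # 0.5555555555555556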

Or take a look at a more complete implementation in NLTK.


You can use numpy to make the code more concise and (sometimes) run faster. For example, in the two-class case your function can be rewritten as (see mply.acc()):

def accuracy(actual, predicted):
    """accuracy = (tp + tn) / ts

    , where:    

        ts - Total Samples
        tp - True Positives
        tn - True Negatives
    """
    return (actual == predicted).sum() / float(len(actual))

where:

actual    = (numpy.array(input_arr) == 2)
predicted = (numpy.array(prob_arr) < 0.5)
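
Putting this together with the question's sample arrays and the accuracy function above:

import numpy

prob_arr  = [1.0, 1.0, 1.0, 0.41592955657342651, 1.0, 0.0053405015805891975,
             4.5321494433440449e-299, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
             0.70943426182688163, 1.0, 1.0, 1.0, 1.0]
input_arr = [2, 1, 1, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 1]

actual    = (numpy.array(input_arr) == 2)  # True where the true class is 2
predicted = (numpy.array(prob_arr) < 0.5)  # True where the classifier predicts class 2

print(accuracy(actual, predicted))         # 0.6666666666666666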

A numpy-only solution that works for any number of classes and requires no loops:

import numpy as np

classes = 3
true = np.random.randint(0, classes, 50)
pred = np.random.randint(0, classes, 50)

# minlength makes sure the result can always be reshaped, even if the
# largest (true, pred) combination never occurs in the data
np.bincount(true * classes + pred, minlength=classes * classes).reshape((classes, classes))
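
The trick is that each (true, pred) pair is encoded as the single integer true * classes + pred, so one bincount call tallies every cell at once. As a quick sanity check (assuming scikit-learn is installed, purely for comparison):

from sklearn.metrics import confusion_matrix

cm = np.bincount(true * classes + pred, minlength=classes * classes).reshape((classes, classes))
assert (cm == confusion_matrix(true, pred, labels=list(range(classes)))).all()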