Python: tf-idf-cosine: to find document similarity
machine-learning
nltk
python

I was following a tutorial available at Part 1 and Part 2. Unfortunately the author didn't have time for the final section, which involved using cosine similarity to actually find the distance between two documents. I followed the examples in the article with the help of the following link from stackoverflow; included is the code mentioned in the above link (just to make life easier).

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from nltk.corpus import stopwords
import numpy as np
import numpy.linalg as LA

train_set = ["The sky is blue.", "The sun is bright."]  # Documents
test_set = ["The sun in the sky is bright."]  # Query
stopWords = stopwords.words('english')

vectorizer = CountVectorizer(stop_words = stopWords)
#print vectorizer
transformer = TfidfTransformer()
#print transformer

trainVectorizerArray = vectorizer.fit_transform(train_set).toarray()
testVectorizerArray = vectorizer.transform(test_set).toarray()
print('Fit Vectorizer to train set', trainVectorizerArray)
print('Transform Vectorizer to test set', testVectorizerArray)

transformer.fit(trainVectorizerArray)
print()
print(transformer.transform(trainVectorizerArray).toarray())

transformer.fit(testVectorizerArray)
print()
tfidf = transformer.transform(testVectorizerArray)
print(tfidf.todense())

As a result of the above code I have the following matrices:

Fit Vectorizer to train set [[1 0 1 0]
 [0 1 0 1]]
Transform Vectorizer to test set [[0 1 1 1]]

[[ 0.70710678  0.          0.70710678  0.        ]
 [ 0.          0.70710678  0.          0.70710678]]

[[ 0.          0.57735027  0.57735027  0.57735027]]

I am not sure how to use this output to calculate cosine similarity. I know how to implement cosine similarity for two vectors of similar length, but here I am not sure how to identify the two vectors.

Source: Stack Overflow
6 answers

I know this is an old post, but I tried the http://scikit-learn.sourceforge.net/stable/ package. The question was how to calculate cosine similarity with this package, and here is my code for finding it:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.feature_extraction.text import TfidfVectorizer

with open("/root/Myfolder/scoringDocuments/doc1", encoding="utf-8", errors="ignore") as f:
    doc1 = f.read()
with open("/root/Myfolder/scoringDocuments/doc2", encoding="utf-8", errors="ignore") as f:
    doc2 = f.read()
with open("/root/Myfolder/scoringDocuments/doc3", encoding="utf-8", errors="ignore") as f:
    doc3 = f.read()

train_set = ["president of India",doc1, doc2, doc3]

tfidf_vectorizer = TfidfVectorizer()
tfidf_matrix_train = tfidf_vectorizer.fit_transform(train_set)  #finds the tfidf score with normalization
print("cosine scores ==> ", cosine_similarity(tfidf_matrix_train[0:1], tfidf_matrix_train))  # here the first element of tfidf_matrix_train is matched with the other three elements

Here I suppose that the query is the first element of train_set, and doc1, doc2 and doc3 are the documents which I want to rank with the help of cosine similarity. Then I can use this code.

The tutorials referred to in the question were also very useful. Here are all the parts of it: Part I, Part II, Part III.

The output will be as follows:

[[ 1.          0.07102631  0.02731343  0.06348799]]

Here 1 shows that the query is matched with itself, and the other three are the scores for matching the query with the respective documents.
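To turn that row of scores into an actual ranking of doc1, doc2 and doc3, something like this sketch could be used (the scores array is copied from the output above; the doc labels are just illustrative):

```python
import numpy as np

# Scores copied from the output above: index 0 is the query itself,
# indices 1..3 correspond to doc1..doc3.
cosine_scores = np.array([[1.0, 0.07102631, 0.02731343, 0.06348799]])

doc_scores = cosine_scores[0, 1:]        # drop the query-vs-query entry
ranking = np.argsort(doc_scores)[::-1]   # document indices, best match first
print([f"doc{i + 1}" for i in ranking])  # ['doc1', 'doc3', 'doc2']
```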


With the help of @excray's comment, I managed to figure out the answer. What we actually need to do is write a simple for loop to iterate over the two arrays that represent the train data and the test data.

First implement a simple lambda function to hold the formula for the cosine calculation:

cosine_function = lambda a, b : round(np.inner(a, b)/(LA.norm(a)*LA.norm(b)), 3)

Then just write a simple for loop to iterate over the two vectors; the logic is: "for each vector in trainVectorizerArray, you have to find the cosine similarity with the vector in testVectorizerArray."

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from nltk.corpus import stopwords
import numpy as np
import numpy.linalg as LA

train_set = ["The sky is blue.", "The sun is bright."] #Documents
test_set = ["The sun in the sky is bright."] #Query
stopWords = stopwords.words('english')

vectorizer = CountVectorizer(stop_words = stopWords)
#print vectorizer
transformer = TfidfTransformer()
#print transformer

trainVectorizerArray = vectorizer.fit_transform(train_set).toarray()
testVectorizerArray = vectorizer.transform(test_set).toarray()
print('Fit Vectorizer to train set', trainVectorizerArray)
print('Transform Vectorizer to test set', testVectorizerArray)
cx = lambda a, b: round(np.inner(a, b) / (LA.norm(a) * LA.norm(b)), 3)

for vector in trainVectorizerArray:
    print(vector)
    for testV in testVectorizerArray:
        print(testV)
        cosine = cx(vector, testV)
        print(cosine)

transformer.fit(trainVectorizerArray)
print()
print(transformer.transform(trainVectorizerArray).toarray())

transformer.fit(testVectorizerArray)
print()
tfidf = transformer.transform(testVectorizerArray)
print(tfidf.todense())

Here is the output:

Fit Vectorizer to train set [[1 0 1 0]
 [0 1 0 1]]
Transform Vectorizer to test set [[0 1 1 1]]
[1 0 1 0]
[0 1 1 1]
0.408
[0 1 0 1]
[0 1 1 1]
0.816

[[ 0.70710678  0.          0.70710678  0.        ]
 [ 0.          0.70710678  0.          0.70710678]]

[[ 0.          0.57735027  0.57735027  0.57735027]]

Let me give you another tutorial written by me. It answers your question, but also makes an explanation of why we are doing some of the things. I also tried to keep it concise.

So you have a list_of_documents, which is just an array of strings, and another document, which is just a string. You need to find the document from list_of_documents that is most similar to document.

Let's combine them together: documents = list_of_documents + [document]

Let's start with dependencies. It will become clear why we use each of them.

from nltk.corpus import stopwords
import string
from nltk.tokenize import wordpunct_tokenize as tokenize
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.spatial.distance import cosine

One of the approaches that can be used is the bag-of-words approach, where we treat each word in the document independently of the others and just throw all of them together into one big bag. From one point of view it loses a lot of information (like how the words are connected), but from another point of view it makes the model simple.

In English, and in any other human language, there are a lot of "useless" words like "a", "the", "in", which are so common that they do not possess much meaning. They are called stop words, and it is a good idea to remove them. Another thing one can notice is that words like "analyze", "analyzer", "analysis" are really similar. They have a common root and can all be converted to just one word. This process is called stemming, and different stemmers exist which differ in speed, aggressiveness, and so on. So we transform each of the documents into a list of stems of words without stop words. We also discard all the punctuation.

porter = PorterStemmer()
stop_words = set(stopwords.words('english'))

modified_arr = [[porter.stem(i.lower()) for i in tokenize(d.translate(str.maketrans('', '', string.punctuation))) if i.lower() not in stop_words] for d in documents]

So how will this bag of words help us? Imagine we have 3 bags: [a, b, c], [a, c, a] and [b, c, d]. We can convert them to vectors in the basis [a, b, c, d]. So we end up with the vectors: [1, 1, 1, 0], [2, 0, 1, 0] and [0, 1, 1, 1]. The same happens with our documents (only the vectors will be way longer). Now we see that we removed a lot of words and stemmed others in order to decrease the dimension of the vectors. Here there is an interesting observation: longer documents will have way more positive elements than shorter ones, which is why it is nice to normalize the vectors. This is called term frequency, TF; people also use additional information about how often a word is used in other documents - inverse document frequency, IDF. Together we have the TF-IDF metric, which comes in a couple of flavors. This can be achieved with one line in sklearn :-)

modified_doc = [' '.join(i) for i in modified_arr] # this is only to convert our list of lists to list of strings that vectorizer uses.
tf_idf = TfidfVectorizer().fit_transform(modified_doc)
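As a side note, the count-vector construction described above (before any TF-IDF weighting) can be reproduced by hand; this sketch only mirrors the toy bags from the text:

```python
# The three toy bags and the basis [a, b, c, d] from the paragraph above.
bags = [["a", "b", "c"], ["a", "c", "a"], ["b", "c", "d"]]
vocab = ["a", "b", "c", "d"]

# Each vector counts how often each basis term occurs in the bag.
vectors = [[bag.count(term) for term in vocab] for bag in bags]
print(vectors)  # [[1, 1, 1, 0], [2, 0, 1, 0], [0, 1, 1, 1]]
```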

Actually, the vectorizer allows you to do a lot of things, like removing stop words and lowercasing. I have done them in a separate step only because sklearn does not have non-English stop words, while nltk does.

So now we have all the vectors calculated. The final step is to find which one is the most similar to the last one. There are various ways to achieve that. One of them is Euclidean distance, which is not so great for the reason discussed here. Another approach is cosine similarity. We iterate over all the documents and calculate the cosine similarity between each document and the last one:

l = len(documents) - 1
minimum = (1, None)
for i in range(l):
    dist = cosine(tf_idf[i].toarray().ravel(), tf_idf[l].toarray().ravel())
    minimum = min((dist, i), minimum)
print(minimum)

Now minimum will hold information about the best document and its score.


Here is a function that compares your test data against the training data, with the Tf-Idf transformer fitted on the training data. The advantage is that you can quickly pivot or group by to find the n closest elements, and that the calculations are done matrix-wise.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def create_tokenizer_score(new_series, train_series, tokenizer):
    """
    return the tf idf score of each possible pairs of documents
    Args:
        new_series (pd.Series): new data (To compare against train data)
        train_series (pd.Series): train data (To fit the tf-idf transformer)
    Returns:
        pd.DataFrame
    """

    train_tfidf = tokenizer.fit_transform(train_series)
    new_tfidf = tokenizer.transform(new_series)
    X = pd.DataFrame(cosine_similarity(new_tfidf, train_tfidf), columns=train_series.index)
    X['ix_new'] = new_series.index
    score = pd.melt(
        X,
        id_vars='ix_new',
        var_name='ix_train',
        value_name='score'
    )
    return score

train_set = pd.Series(["The sky is blue.", "The sun is bright."])
test_set = pd.Series(["The sun in the sky is bright."])
tokenizer = TfidfVectorizer() # initiate here your own tokenizer (TfidfVectorizer, CountVectorizer, with stopwords...)
score = create_tokenizer_score(train_series=train_set, new_series=test_set, tokenizer=tokenizer)
score

   ix_new   ix_train    score
0   0       0       0.617034
1   0       1       0.862012
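The "pivot or group by" advantage mentioned above could be sketched like this, reusing the long-format frame shown in the output (values copied by hand, so the numbers are only illustrative):

```python
import pandas as pd

# Long-format scores as returned above: one row per (new doc, train doc) pair.
score = pd.DataFrame({
    'ix_new':   [0, 0],
    'ix_train': [0, 1],
    'score':    [0.617034, 0.862012],
})

# For each new document, keep its n closest training documents.
n = 1
closest = (score.sort_values('score', ascending=False)
                .groupby('ix_new')
                .head(n))
print(closest)  # the ix_train=1 row, i.e. "The sun is bright."
```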

First off, if you want to extract count features and apply TF-IDF normalization and row-wise Euclidean normalization, you can do it in one operation with TfidfVectorizer:

>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> from sklearn.datasets import fetch_20newsgroups
>>> twenty = fetch_20newsgroups()

>>> tfidf = TfidfVectorizer().fit_transform(twenty.data)
>>> tfidf
<11314x130088 sparse matrix of type '<type 'numpy.float64'>'
    with 1787553 stored elements in Compressed Sparse Row format>

Now, to find the cosine distances of one document (e.g. the first in the dataset) and all of the others, you just need to compute the dot products of the first vector with all of the others, as the tfidf vectors are already row-normalized.

As Chris Clark notes in the comments, cosine similarity does not take the magnitude of the vectors into account. Row-normalized vectors have a magnitude of 1, so the linear kernel is sufficient to calculate the similarity values.
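That equivalence can be sanity-checked directly; this sketch uses the small documents from the question instead of the newsgroups data:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity, linear_kernel

# TfidfVectorizer L2-normalizes each row by default, so the plain dot
# product (linear kernel) coincides with cosine similarity.
docs = ["The sky is blue.", "The sun is bright.", "The sun in the sky is bright."]
tfidf = TfidfVectorizer().fit_transform(docs)

lk = linear_kernel(tfidf[0:1], tfidf)
cs = cosine_similarity(tfidf[0:1], tfidf)
print(np.allclose(lk, cs))  # True
```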

The scipy sparse matrix API is a bit weird (not as flexible as dense N-dimensional numpy arrays). To get the first vector you need to slice the matrix row-wise to get a submatrix with a single row:

>>> tfidf[0:1]
<1x130088 sparse matrix of type '<type 'numpy.float64'>'
    with 89 stored elements in Compressed Sparse Row format>

scikit-learn already provides pairwise metrics (a.k.a. kernels, in machine learning parlance) that work for both dense and sparse representations of vector collections. In this case we need a dot product, also known as the linear kernel:

>>> from sklearn.metrics.pairwise import linear_kernel
>>> cosine_similarities = linear_kernel(tfidf[0:1], tfidf).flatten()
>>> cosine_similarities
array([ 1.        ,  0.04405952,  0.11016969, ...,  0.04433602,
    0.04457106,  0.03293218])

Hence, to find the top 5 related documents, we can use argsort and some negative array slicing (most related documents have the highest cosine similarity values, and hence sit at the end of the sorted indices array):

>>> related_docs_indices = cosine_similarities.argsort()[:-5:-1]
>>> related_docs_indices
array([    0,   958, 10576,  3277])
>>> cosine_similarities[related_docs_indices]
array([ 1.        ,  0.54967926,  0.32902194,  0.2825788 ])

The first result is a sanity check: we find the query document as the most similar document, with a cosine similarity score of 1, and the following text:

>>> print(twenty.data[0])
From: lerxst@wam.umd.edu (where's my thing)
Subject: WHAT car is this!?
Nntp-Posting-Host: rac3.wam.umd.edu
Organization: University of Maryland, College Park
Lines: 15

 I was wondering if anyone out there could enlighten me on this car I saw
the other day. It was a 2-door sports car, looked to be from the late 60s/
early 70s. It was called a Bricklin. The doors were really small. In addition,
the front bumper was separate from the rest of the body. This is
all I know. If anyone can tellme a model name, engine specs, years
of production, where this car is made, history, or whatever info you
have on this funky looking car, please e-mail.

Thanks,
- IL
   ---- brought to you by your neighborhood Lerxst ----

The second most similar document is a reply that quotes the original message, and hence has many words in common:

>>> print(twenty.data[958])
From: rseymour@reed.edu (Robert Seymour)
Subject: Re: WHAT car is this!?
Article-I.D.: reed.1993Apr21.032905.29286
Reply-To: rseymour@reed.edu
Organization: Reed College, Portland, OR
Lines: 26

In article <1993Apr20.174246.14375@wam.umd.edu> lerxst@wam.umd.edu (where's my
thing) writes:
>
>  I was wondering if anyone out there could enlighten me on this car I saw
> the other day. It was a 2-door sports car, looked to be from the late 60s/
> early 70s. It was called a Bricklin. The doors were really small. In
addition,
> the front bumper was separate from the rest of the body. This is
> all I know. If anyone can tellme a model name, engine specs, years
> of production, where this car is made, history, or whatever info you
> have on this funky looking car, please e-mail.

Bricklins were manufactured in the 70s with engines from Ford. They are rather
odd looking with the encased front bumper. There aren't a lot of them around,
but Hemmings (Motor News) ususally has ten or so listed. Basically, they are a
performance Ford with new styling slapped on top.

>    ---- brought to you by your neighborhood Lerxst ----

Rush fan?

--
Robert Seymour              rseymour@reed.edu
Physics and Philosophy, Reed College    (NeXTmail accepted)
Artificial Life Project         Reed College
Reed Solar Energy Project (SolTrain)    Portland, OR

This should help you.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity  

# Assuming train_set holds the documents with the query as the last element,
# as in the question:
train_set = ["The sky is blue.", "The sun is bright.", "The sun in the sky is bright."]
length = len(train_set)

tfidf_vectorizer = TfidfVectorizer()
tfidf_matrix = tfidf_vectorizer.fit_transform(train_set)
cosine = cosine_similarity(tfidf_matrix[length - 1], tfidf_matrix)
print(cosine)

The output will be:

[[ 0.34949812  0.81649658  1.        ]]