Association rule mining is one of the most active research areas in data mining. It is used to discover relationships between things, and was originally developed to find relationships between different products in supermarket transaction databases (the classic "beer and diapers" example).
Basic concepts
1. Support: support(X-->Y) = σ(X ∪ Y) / N = the number of records in which the items of X and of Y all appear together, divided by the total number of records. For example: support({beer}-->{diapers}) = (records containing both beer and diapers) / (number of records) = 3/5 = 60%.
2. Confidence: confidence(X-->Y) = σ(X ∪ Y) / σ(X) = the number of records in which the items of X and of Y appear together, divided by the number of records containing X. For example: confidence({beer}-->{diapers}) = (records containing both beer and diapers) / (records containing beer) = 3/3 = 100%; confidence({diapers}-->{beer}) = (records containing both) / (records containing diapers) = 3/4 = 75%.
A rule that satisfies both a minimum support threshold (min_sup) and a minimum confidence threshold (min_conf) is called a strong rule; an itemset that satisfies the minimum support threshold is called a frequent itemset.
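Both measures are just counting over the transaction records. A minimal Python sketch, using a hypothetical five-record transaction list chosen only to reproduce the counts in the example above (beer in 3 records, diapers in 4, both together in 3), might look like this:

# a hypothetical 5-record transaction database matching the counts used above
transactions = [
    {'beer', 'diapers', 'milk'},
    {'beer', 'diapers', 'bread'},
    {'beer', 'diapers'},
    {'diapers', 'bread'},
    {'milk', 'bread'},
]

def support(X, Y, records):
    # fraction of records in which every item of X and of Y appears
    both = sum(1 for t in records if X | Y <= t)
    return float(both) / len(records)

def confidence(X, Y, records):
    # among the records containing X, the fraction that also contain Y
    x_count = sum(1 for t in records if X <= t)
    both = sum(1 for t in records if X | Y <= t)
    return float(both) / x_count

print(support({'beer'}, {'diapers'}, transactions))      # 0.6  -> 60%
print(confidence({'beer'}, {'diapers'}, transactions))   # 1.0  -> 100%
print(confidence({'diapers'}, {'beer'}, transactions))   # 0.75 -> 75%

A rule is then reported only when its support reaches min_sup and its confidence reaches min_conf.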
[Figure: how to mine association rules from a large database, illustrating the Apriori algorithm step by step (image: //img.jbzj.com/file_images/article/201411/2014117113744567.jpg)]
The figure above walks through the Apriori algorithm. Note that when the 3-item candidates are generated from the frequent 2-itemsets, {milk, bread, beer} is not generated, because {bread, beer} is not a frequent 2-itemset; this is exactly the Apriori property at work. After the frequent 3-itemsets are produced there are no higher-level candidates, so the algorithm terminates, and {milk, bread, diapers} is the maximal frequent itemset.
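The pruning in the figure is exactly the Apriori property: every subset of a frequent itemset must itself be frequent, so a candidate with any infrequent subset can be discarded without scanning the data again. A minimal sketch of that check (the helper name and the list of frequent 2-itemsets are hypothetical, chosen to mirror the figure's scenario):

from itertools import combinations

def has_infrequent_subset(candidate, frequent_prev):
    # Apriori property: if any (k-1)-subset of a k-item candidate is not frequent,
    # the candidate itself cannot be frequent and is pruned before its support is counted
    return any(frozenset(sub) not in frequent_prev
               for sub in combinations(candidate, len(candidate) - 1))

# hypothetical frequent 2-itemsets; note that {bread, beer} is NOT among them
frequent_2 = {frozenset(p) for p in [('milk', 'bread'), ('milk', 'diapers'),
                                     ('bread', 'diapers'), ('diapers', 'beer'),
                                     ('milk', 'beer')]}

print(has_infrequent_subset(('milk', 'bread', 'beer'), frequent_2))     # True  -> pruned
print(has_infrequent_subset(('milk', 'bread', 'diapers'), frequent_2))  # False -> kept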
Python implementation (taken from the GitHub repository taizilongxu/datamining, file apriori/apriori.py):
#-*- encoding: UTF-8 -*-
#---------------------------------import------------------------------------
import itertools
#---------------------------------------------------------------------------


class Apriori(object):

    def __init__(self, filename, min_support, item_start, item_end):
        self.filename = filename
        self.min_support = min_support  # minimum support, in percent
        self.min_confidence = 50        # minimum confidence, in percent (hard-coded)
        self.line_num = 0               # number of data records
        self.item_start = item_start    # first item column to read
        self.item_end = item_end        # last item column to read
        self.location = [[i] for i in range(self.item_end - self.item_start + 1)]
        self.support = self.sut(self.location)
        self.num = list(sorted(set([j for i in self.location for j in i])))  # item indices
        self.pre_support = []   # previous level's support / location / num
        self.pre_location = []
        self.pre_num = []
        self.item_name = []     # item names
        self.find_item_name()
        self.loop()
        self.confidence_sup()

    def deal_line(self, line):
        "Extract the needed item columns from one line."
        return [i.strip() for i in line.split(' ') if i][self.item_start - 1:self.item_end]

    def find_item_name(self):
        "Read the item names from the first line of the file."
        with open(self.filename, 'r') as F:
            for index, line in enumerate(F.readlines()):
                if index == 0:
                    self.item_name = self.deal_line(line)
                    break

    def sut(self, location):
        """
        Input:  candidate itemsets given as column positions, e.g. [[1,2,3],[2,3,4],[1,3,5],...]
        Output: the support count of each candidate, e.g. [123, 435, 234, ...]
        """
        with open(self.filename, 'r') as F:
            support = [0] * len(location)
            for index, line in enumerate(F.readlines()):
                if index == 0:
                    continue
                # extract the items of this record
                item_line = self.deal_line(line)
                for index_num, i in enumerate(location):
                    flag = 0
                    for j in i:
                        if item_line[j] != 'T':
                            flag = 1
                            break
                    if not flag:
                        support[index_num] += 1
            self.line_num = index  # total number of records, excluding the item_name header line
        return support

    def select(self, c):
        "Generate the next level of candidate itemsets (as column positions)."
        stack = []
        for i in self.location:
            for j in self.num:
                if j in i:
                    if len(i) == c:
                        stack.append(i)
                else:
                    stack.append([j] + i)
        # deduplicate the nested list
        s = sorted([sorted(i) for i in stack])
        location = list(s for s, _ in itertools.groupby(s))
        return location

    def del_location(self, support, location):
        "Prune candidates that fail the conditions."
        # drop candidates below the minimum support
        for index, i in enumerate(support):
            if i < self.line_num * self.min_support / 100:
                support[index] = 0
        # Apriori pruning rule: drop candidates that have an infrequent subset
        for index, j in enumerate(location):
            sub_location = [j[:index_loc] + j[index_loc + 1:] for index_loc in range(len(j))]
            flag = 0
            for k in sub_location:
                if k not in self.location:
                    flag = 1
                    break
            if flag:
                support[index] = 0
        # delete the pruned positions
        location = [i for i, j in zip(location, support) if j != 0]
        support = [i for i in support if i != 0]
        return support, location

    def loop(self):
        "Iterate over the level-s frequent itemsets."
        s = 2
        while True:
            print '-' * 80
            print 'The', s - 1, 'loop'
            print 'location', self.location
            print 'support', self.support
            print 'num', self.num
            print '-' * 80
            # generate the next level of candidates
            location = self.select(s)
            support = self.sut(location)
            support, location = self.del_location(support, location)
            num = list(sorted(set([j for i in location for j in i])))
            s += 1
            if location and support and num:
                self.pre_num = self.num
                self.pre_location = self.location
                self.pre_support = self.support
                self.num = num
                self.location = location
                self.support = support
            else:
                break

    def confidence_sup(self):
        "Compute the confidence of the rules derived from the final frequent itemsets."
        if sum(self.pre_support) == 0:
            print 'min_support error'  # the very first iteration already failed
        else:
            for index_location, each_location in enumerate(self.location):
                # the (k-1)-subsets of this frequent itemset, i.e. the previous-level itemsets
                del_num = [each_location[:index] + each_location[index + 1:] for index in range(len(each_location))]
                # keep only subsets that are previous-level frequent itemsets
                del_num = [i for i in del_num if i in self.pre_location]
                # look up their support counts from the previous level
                del_support = [self.pre_support[self.pre_location.index(i)] for i in del_num if i in self.pre_location]
                for index, i in enumerate(del_num):  # compute the support and confidence of each rule
                    index_support = 0
                    if len(self.support) != 1:
                        index_support = index
                    support = float(self.support[index_location]) / self.line_num * 100  # support
                    s = [j for index_item, j in enumerate(self.item_name) if index_item in i]
                    if del_support[index]:
                        confidence = float(self.support[index_location]) / del_support[index] * 100
                        if confidence > self.min_confidence:
                            print ','.join(s), '-', self.item_name[each_location[index]], ' min_support: ', str(support) + '%', ' min_confidence:', str(confidence) + '%'


def main():
    c = Apriori('basket.txt', 14, 3, 13)
    d = Apriori('simple.txt', 50, 2, 6)

if __name__ == '__main__':
    main()
############################################################################
The Apriori algorithm
Apriori(filename, min_support, item_start, item_end)
Parameters:
filename: (path to the) data file
min_support: minimum support, in percent
item_start: first item column
item_end: last item column
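From the code, the class expects a plain text file: deal_line splits each line on spaces and keeps the columns from item_start to item_end, find_item_name reads the item names from the first line, and sut counts an item as present when its column contains 'T'. A minimal sketch of a file in that layout (all column names and values below are hypothetical; the real basket.txt and simple.txt are not shown in this article):

# hypothetical demo file in the layout the class expects:
# first line = column names, item columns marked with 'T' when the item was bought
sample = """id value milk bread diapers beer
1 12 T T T F
2 30 F T T T
3 8 T F T T
"""
with open('demo.txt', 'w') as f:
    f.write(sample)

import apriori
# columns 3..6 hold the items, minimum support 50%
d = apriori.Apriori('demo.txt', 50, 3, 6)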
Usage example:
import apriori
c = apriori.Apriori('basket.txt', 11, 3, 13)
Output:
--------------------------------------------------------------------------------
The 1 loop
location [[0], [1], [2], [3], [4], [5], [6], [7], [8], [9], [10]]
support [299, 183, 177, 303, 204, 302, 293, 287, 184, 292, 276]
num [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
The 2 loop
location [[0, 9], [3, 5], [3, 6], [5, 6], [7, 10]]
support [145, 173, 167, 170, 144]
num [0, 3, 5, 6, 7, 9, 10]
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
The 3 loop
location [[3, 5, 6]]
support [146]
num [3, 5, 6]
--------------------------------------------------------------------------------
frozenmeal,beer - cannedveg min_support: 14.6% min_confidence: 0.858823529412
cannedveg,beer - frozenmeal min_support: 14.6% min_confidence: 0.874251497006
cannedveg,frozenmeal - beer min_support: 14.6% min_confidence: 0.843930635838
--------------------------------------------------------------------------------