Introduction
I wrote this little crawler mainly to scrape internship postings from the campus forum (CC98). It is built primarily on the Requests library.
Source code
URLs.py
Given an initial URL that contains a page parameter, this module builds the list of URLs running from the current page number up to pageNum.
import re


def getURLs(url, attr, pageNum=1):
    """Build the list of page URLs from the current page number up to pageNum."""
    all_links = []
    try:
        # pull the current page number out of the query string, e.g. "page=3" -> 3
        now_page_number = int(re.search(attr + r'=(\d+)', url).group(1))
        for i in range(now_page_number, pageNum + 1):
            # swap the page number in place to get each following page's URL
            new_url = re.sub(attr + r'=\d+', attr + '=%s' % i, url)
            all_links.append(new_url)
        return all_links
    except TypeError:
        print "arguments TypeError: attr should be a string."
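For example, a minimal usage sketch (the board id below is made up for illustration):

urls = getURLs('http://www.cc98.org/list.asp?boardid=459&page=3', 'page', 5)
# -> ['http://www.cc98.org/list.asp?boardid=459&page=3',
#     'http://www.cc98.org/list.asp?boardid=459&page=4',
#     'http://www.cc98.org/list.asp?boardid=459&page=5']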
uni_2_native.py
The Chinese text in pages scraped from the forum comes back as HTML numeric character references of the form &#XXXX;, so the content still needs to be converted after fetching.
import sys
import re

reload(sys)
sys.setdefaultencoding('utf-8')


def get_native(raw):
    """Replace every numeric character reference with its native character,
    e.g. get_native(u'&#72;&#105;') -> u'Hi'."""
    tostring = raw
    while True:
        obj = re.search(r'&#(\d+);', tostring)
        if obj is None:
            break
        # str.replace swaps every occurrence of this reference at once
        tostring = tostring.replace(obj.group(0), unichr(int(obj.group(1))))
    return tostring

saveInfo.py

Collects each post's information in memory and then writes it all out to a MySQL database.

# -*- coding: utf-8 -*-
import MySQLdb


# note: despite the name, this class writes to MySQL, not SQLite
class saveSqlite():
    def __init__(self):
        self.infoList = []

    def saveSingle(self, author=None, title=None, date=None, url=None,
                   reply=0, view=0):
        if author is None or title is None or date is None or url is None:
            print "No info saved!"
        else:
            singleDict = {}
            singleDict['author'] = author
            singleDict['title'] = title
            singleDict['date'] = date
            singleDict['url'] = url
            singleDict['reply'] = reply
            singleDict['view'] = view
            self.infoList.append(singleDict)

    def toMySQL(self):
        conn = MySQLdb.connect(host='localhost', user='root', passwd='',
                               port=3306, db='db_name', charset='utf8')
        cursor = conn.cursor()
        # debug helper: dump the current contents of the table
        # cursor.execute("select * from info")
        # for row in cursor.fetchall():
        #     print row
        # clear the table first so re-running the crawler does not duplicate rows
        sql = "delete from info"
        cursor.execute(sql)
        conn.commit()
        sql = ("insert into info(title,author,url,date,reply,view) "
               "values (%s,%s,%s,%s,%s,%s)")
        params = []
        for each in self.infoList:
            params.append((each['title'], each['author'], each['url'],
                           each['date'], each['reply'], each['view']))
        cursor.executemany(sql, params)
        conn.commit()
        cursor.close()
        conn.close()

    def show(self):
        for each in self.infoList:
            print "author: " + each['author']
            print "title: " + each['title']
            print "date: " + each['date']
            print "url: " + each['url']
            print "reply: " + str(each['reply'])
            print "view: " + str(each['view'])
            print '\n'


if __name__ == '__main__':
    save = saveSqlite()
    save.saveSingle('网', 'aaa', '2008-10-10 10:10:10', 'www.baidu.com', 1, 1)
    # save.show()
    save.toMySQL()
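The INSERT above assumes an info table already exists in db_name. A minimal sketch for creating it (the column types are assumptions; adjust to your data):

# -*- coding: utf-8 -*-
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='root', passwd='',
                       port=3306, db='db_name', charset='utf8')
cursor = conn.cursor()
# column names match the insert statement in saveSqlite.toMySQL()
cursor.execute("""
    CREATE TABLE IF NOT EXISTS info (
        title  VARCHAR(255),
        author VARCHAR(64),
        url    VARCHAR(255),
        date   VARCHAR(32),
        reply  INT,
        view   INT
    )
""")
conn.commit()
cursor.close()
conn.close()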
Main crawler code

import requests
from lxml import etree

from cc98 import uni_2_native, URLs, saveInfo

# Forge a header to match the site you are crawling; fill these fields in
# from a logged-in browser session (the Cookie in particular).
headers = {
    'Accept': '',
    'Accept-Encoding': '',
    'Accept-Language': '',
    'Connection': '',
    'Cookie': '',
    'Host': '',
    'Referer': '',
    'Upgrade-Insecure-Requests': '',
    'User-Agent': ''
}

# boardid is illustrative; use the board you want to crawl -- the URL must
# contain a "page" parameter for getURLs to expand
url = 'http://www.cc98.org/list.asp?boardid=459&page=1'
cc98 = 'http://www.cc98.org/'  # site root, used to build absolute post links

print "get information from cc98..."
urls = URLs.getURLs(url, "page", 50)
savetools = saveInfo.saveSqlite()

for url in urls:
    r = requests.get(url, headers=headers)
    html = uni_2_native.get_native(r.text)
    selector = etree.HTML(html)
    content_tr_list = selector.xpath(
        '//form/table[@class="tableborder1 list-topic-table"]/tbody/tr')
    for each in content_tr_list:
        href = each.xpath('./td[2]/a/@href')
        if len(href) == 0:
            continue
        # xpath() returns an ordinary list; each list here holds a single
        # element, so indexing with [0] would work just as well as a loop
        for each_href in href:
            link = cc98 + each_href
        title_author_time = each.xpath('./td[2]/a/@title')
        for info in title_author_time:
            info_split = info.split('\n')
            title = info_split[0][1:len(info_split[0]) - 1]
            author = info_split[1][3:]
            date = info_split[2][3:]
        hot = each.xpath('./td[4]/text()')
        for hot_num in hot:
            reply_view = hot_num.strip().split('/')
            reply, view = reply_view[0], reply_view[1]
        savetools.saveSingle(author=author, title=title, date=date,
                             url=link, reply=reply, view=view)

print "All got! Now saving to Database..."
# savetools.show()
savetools.toMySQL()
print "ALL CLEAR! Have Fun!"

That's all for this article. I hope it helps with your studies, and thanks for your support.
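One footnote on the slicing in the crawler: the @title attribute packs the title, author, and post time into one newline-separated string. A small sketch with a hypothetical attribute value (the exact CC98 format is an assumption reconstructed from the slice offsets):

# -*- coding: utf-8 -*-
# hypothetical @title value: a bracketed title, then u'作者:<name>', then u'时间:<datetime>'
info = u'[暑期实习内推]\n作者:alice\n时间:2017-03-01 12:00:00'

info_split = info.split('\n')
title = info_split[0][1:-1]   # strip the surrounding brackets   -> u'暑期实习内推'
author = info_split[1][3:]    # drop the 3-char prefix u'作者:'  -> u'alice'
date = info_split[2][3:]      # drop the 3-char prefix u'时间:'  -> u'2017-03-01 12:00:00'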