Below are examples of using the Python scraping library BeautifulSoup to traverse the document tree and operate on tags. All of this is basic material.
html_doc = """ <html><head><title>The Dormouse's story</title></head> <p class="title"><b>The Dormouse's story</b></p> <p class="story">Once upon a time there were three little sisters; and their names were <a href="http://example.com/elsie" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" class="sister" id="link1">Elsie</a>, <a href="http://example.com/lacie" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" class="sister" id="link2">Lacie</a> and <a href="http://example.com/tillie" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" class="sister" id="link3">Tillie</a>; and they lived at the bottom of a well.</p> <p class="story">...</p> """ from bs4 import BeautifulSoup soup = BeautifulSoup(html_doc,'lxml')
I. Child Nodes
A Tag may contain multiple strings or other Tags; these are all children of that Tag. BeautifulSoup provides many attributes for operating on and traversing child nodes.
1. Getting a Tag by its name
print(soup.head)
print(soup.title)
<head><title>The Dormouse's story</title></head>
<title>The Dormouse's story</title>
Accessing a Tag by name only returns the first matching Tag. To get all Tags of a given kind, use the find_all method.
soup.find_all('a')
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
 <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
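Since find_all returns an ordinary list of Tags, a common next step is to loop over it and pull out attributes. A minimal runnable sketch (using Python's built-in html.parser instead of lxml so no extra install is needed; the shortened html_doc below is an assumption for self-containment):

```python
from bs4 import BeautifulSoup

html_doc = """<html><head><title>The Dormouse's story</title></head>
<p class="story">Once upon a time there were three little sisters;
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>"""

soup = BeautifulSoup(html_doc, "html.parser")  # html.parser ships with Python

# Tags support dict-style attribute access, so collecting every link is one line.
urls = [a["href"] for a in soup.find_all("a")]
print(urls)
# ['http://example.com/elsie', 'http://example.com/lacie', 'http://example.com/tillie']
```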
2. The contents attribute: returns the Tag's child nodes as a list
head_tag = soup.head
head_tag.contents
[<title>The Dormouse's story</title>]
title_tag = head_tag.contents[0]
title_tag
<title>The Dormouse's story</title>
title_tag.contents
["The Dormouse's story"]
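Note that the item inside a tag's contents can itself be a string node rather than a Tag: the text inside <title> is a NavigableString, a leaf of the tree. A small sketch (self-contained, using html.parser as an assumed stand-in for lxml):

```python
from bs4 import BeautifulSoup, NavigableString

soup = BeautifulSoup("<head><title>The Dormouse's story</title></head>", "html.parser")
title_tag = soup.head.contents[0]   # the <title> Tag
assert title_tag.name == "title"

# The title's only child is a NavigableString leaf, not another Tag.
text_node = title_tag.contents[0]
assert isinstance(text_node, NavigableString)
assert text_node == "The Dormouse's story"
```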
3. children: use this attribute to loop over the direct child nodes
for child in title_tag.children:
    print(child)
The Dormouse's story
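The practical difference between contents and children: contents materializes the direct children as a list, while children is a lazy iterator over the same nodes. A sketch (assuming html.parser in place of lxml):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<head><title>The Dormouse's story</title></head>", "html.parser")
head_tag = soup.head

# .contents is a real list you can index and slice...
assert isinstance(head_tag.contents, list)

# ...while .children yields the same nodes one at a time.
children = list(head_tag.children)
assert children == head_tag.contents
```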
4. descendants: both contents and children return only direct children, whereas descendants recursively iterates over all of a tag's descendants
for child in head_tag.children:
    print(child)
<title>The Dormouse's story</title>
for child in head_tag.descendants:
    print(child)
<title>The Dormouse's story</title>
The Dormouse's story
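Counting the nodes makes the recursion visible: <head> has one direct child (the <title> tag) but two descendants, because descendants also reaches the string inside <title>. A sketch (html.parser assumed):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<head><title>The Dormouse's story</title></head>", "html.parser")
head_tag = soup.head

# One direct child: the <title> Tag.
assert len(list(head_tag.children)) == 1

# Two descendants: <title> itself plus the text node inside it.
assert len(list(head_tag.descendants)) == 2
```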
5. string: if a tag has only one child of type NavigableString, the tag can use .string to get that child
title_tag.string
"The Dormouse's story"
If a tag has exactly one child, .string returns that only child's NavigableString.
head_tag.string
"The Dormouse's story"
If a tag has multiple children, .string cannot tell which child's content it should correspond to, so it returns None.
print(soup.html.string)
None
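When .string comes back None because the tag has several children, get_text() is the usual alternative: it concatenates all string descendants. A sketch (hypothetical markup, html.parser assumed):

```python
from bs4 import BeautifulSoup

html = "<p><b>The Dormouse's story</b> begins here.</p>"
soup = BeautifulSoup(html, "html.parser")

# <p> has two children (<b> and a text node), so .string cannot pick one.
assert soup.p.string is None

# get_text() concatenates every string descendant instead.
assert soup.p.get_text() == "The Dormouse's story begins here."
```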
6. strings and stripped_strings
If a tag contains multiple strings, you can loop over them with .strings
for string in soup.strings: print(string)
The Dormouse's story
The Dormouse's story
Once upon a time there were three little sisters; and their names were
Elsie
,
Lacie
and
Tillie
;
and they lived at the bottom of a well.
...
The output of .strings contains many extra spaces and blank lines; use stripped_strings to remove this whitespace
for string in soup.stripped_strings: print(string)
The Dormouse's story
The Dormouse's story
Once upon a time there were three little sisters; and their names were
Elsie
,
Lacie
and
Tillie
;
and they lived at the bottom of a well.
...
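A common pattern built on stripped_strings is to join the cleaned pieces back into a single line of text. A sketch (hypothetical markup, html.parser assumed):

```python
from bs4 import BeautifulSoup

html = "<p> Once upon a time there were <b>three</b> sisters. </p>"
soup = BeautifulSoup(html, "html.parser")

# stripped_strings drops leading/trailing whitespace from each string
# and skips strings that are whitespace only.
text = " ".join(soup.stripped_strings)
print(text)
# Once upon a time there were three sisters.
```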
II. Parent Nodes
1. parent: get an element's parent node
title_tag = soup.title
title_tag.parent
<head><title>The Dormouse's story</title></head>
A string also has a parent node:
title_tag.string.parent
<title>The Dormouse's story</title>
2. parents: recursively get all ancestor nodes
link = soup.a
for parent in link.parents:
    if parent is None:
        print(parent)
    else:
        print(parent.name)
p
body
html
[document]
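The chain ends at the BeautifulSoup object itself, whose name is "[document]". Collecting the ancestor names into a list makes this easy to check in code. A self-contained sketch (hypothetical markup, html.parser assumed):

```python
from bs4 import BeautifulSoup

html = '<html><body><p class="story"><a id="link1">Elsie</a></p></body></html>'
soup = BeautifulSoup(html, "html.parser")
a_tag = soup.find(id="link1")

# Ancestors from the innermost outward, ending at the document object.
names = [parent.name for parent in a_tag.parents]
assert names == ["p", "body", "html", "[document]"]
```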
III. Sibling Nodes
sibling_soup = BeautifulSoup("<a><b>text1</b><c>text2</c></b></a>",'lxml')
print(sibling_soup.prettify())
<html>
 <body>
  <a>
   <b>
    text1
   </b>
   <c>
    text2
   </c>
  </a>
 </body>
</html>
1. next_sibling and previous_sibling
sibling_soup.b.next_sibling
<c>text2</c>
sibling_soup.c.previous_sibling
<b>text1</b>
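At the edges of a tag's children, the sibling attributes return None: the first child has no previous sibling and the last has no next sibling. A sketch (html.parser assumed; the stray </b> from the original markup is dropped since the parser discards it anyway):

```python
from bs4 import BeautifulSoup

sibling_soup = BeautifulSoup("<a><b>text1</b><c>text2</c></a>", "html.parser")

# <b> and <c> are siblings under the same parent <a>.
assert sibling_soup.b.next_sibling.name == "c"
assert sibling_soup.c.previous_sibling.name == "b"

# Boundary cases: no sibling in that direction yields None.
assert sibling_soup.b.previous_sibling is None
assert sibling_soup.c.next_sibling is None
```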
In real documents, .next_sibling and .previous_sibling are usually strings or whitespace.
soup.find_all('a')
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
 <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
soup.a.next_sibling  # the first <a>'s next_sibling is ',\n'
',\n'
soup.a.next_sibling.next_sibling
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
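Having to chain .next_sibling twice just to skip a text node is clumsy; bs4's find_next_sibling method jumps directly to the next sibling that matches a filter. A sketch (hypothetical markup, html.parser assumed):

```python
from bs4 import BeautifulSoup

html = '<p><a id="link1">Elsie</a>,\n<a id="link2">Lacie</a></p>'
soup = BeautifulSoup(html, "html.parser")

# .next_sibling lands on the ',\n' text node between the two links...
assert soup.a.next_sibling == ",\n"

# ...while find_next_sibling("a") skips straight to the next <a> tag.
assert soup.a.find_next_sibling("a")["id"] == "link2"
```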
2. next_siblings and previous_siblings
for sibling in soup.a.next_siblings:
    print(repr(sibling))
',\n'
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
' and\n'
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
';\nand they lived at the bottom of a well.'
for sibling in soup.find(id="link3").previous_siblings:
    print(repr(sibling))
' and\n'
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
',\n'
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
'Once upon a time there were three little sisters; and their names were\n'
IV. Going Backward and Forward
1. next_element and previous_element
These point to the next or previous object (string or tag) in parse order, i.e. the successor and predecessor in a depth-first traversal.
last_a_tag = soup.find("a", id="link3")
print(last_a_tag.next_sibling)
print(last_a_tag.next_element)
;
and they lived at the bottom of a well.
Tillie
last_a_tag.previous_element
' and\n'
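The contrast is worth pinning down: the next sibling of the <a> tag is the text that follows the whole tag, but the next parsed element is the string inside the tag, because depth-first parsing descends into the tag before moving past it. A sketch (hypothetical markup, html.parser assumed):

```python
from bs4 import BeautifulSoup

html = '<p><a id="link3">Tillie</a>;\nand they lived.</p>'
soup = BeautifulSoup(html, "html.parser")
a_tag = soup.find(id="link3")

# The next sibling follows the entire <a> tag...
assert a_tag.next_sibling == ";\nand they lived."

# ...but the next parsed element is the string *inside* the tag.
assert a_tag.next_element == "Tillie"
```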
2. next_elements and previous_elements
With .next_elements and .previous_elements you can move forward or backward through the document's parsed content, as if the document were being parsed again from that point.
for element in last_a_tag.next_elements:
    print(repr(element))
'Tillie'
';\nand they lived at the bottom of a well.'
'\n'
<p class="story">...</p>
'...'
'\n'
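Because next_elements mixes tags and strings, a frequent use is to filter it down to just the text that follows a given point in the document. A sketch (hypothetical markup, html.parser assumed):

```python
from bs4 import BeautifulSoup, NavigableString

html = '<p><a id="link3">Tillie</a>;\nand they lived.</p><p class="story">...</p>'
soup = BeautifulSoup(html, "html.parser")
a_tag = soup.find(id="link3")

# Keep only the string nodes parsed after the <a> tag, skipping the tags.
texts = [s for s in a_tag.next_elements if isinstance(s, NavigableString)]
assert texts == ["Tillie", ";\nand they lived.", "..."]
```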
That covers the basics of traversing the document tree and operating on tags with the Python scraping library BeautifulSoup.