Python Web Scraping Example 1
Scraping the Maoyan Top 100 movie chart (http://maoyan.com/board/4)
step1 Preparation
Goal:
Scrape the title, release time, score, and poster image of every movie in the Maoyan Top 100.
Analysis:
Page 1 URL: https://maoyan.com/board/4, which shows movies ranked 1-10;
Page 2 URL: https://maoyan.com/board/4?offset=10, which shows movies ranked 11-20;
…
So covering the full Top 100 takes 10 separate requests, with the offset parameter set to 0, 10, …, 90 (a quick sketch of the resulting URLs follows below).
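As a sanity check, here is a minimal sketch that builds all 10 page URLs from the offset parameter (offset=0 returns the same page as the bare URL, which is also what the complete code at the end relies on):

base_url = 'https://maoyan.com/board/4'
urls = [f'{base_url}?offset={i * 10}' for i in range(10)]
print(urls[0])   # https://maoyan.com/board/4?offset=0
print(urls[-1])  # https://maoyan.com/board/4?offset=90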
step2 Fetching the data
1. Fetch the source code of the first page
import requests

def get_one_page(url):
    # A browser-like User-Agent makes the request look like an ordinary visit,
    # which sites such as Maoyan tend to require.
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36'
    }
    response = requests.get(url=url, headers=headers)
    html = response.text
    return html

def main():
    url = 'https://maoyan.com/board/4'
    html = get_one_page(url)
    print(html)

if __name__ == '__main__':
    main()
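While developing the regular expression in the next step, it can save repeated requests to cache one fetched page locally; a minimal sketch (the page1.html filename is just an illustration):

html = get_one_page('https://maoyan.com/board/4')
with open('page1.html', 'w', encoding='utf-8') as f:   # cache for offline regex testing
    f.write(html)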
2. Extract the information with a regular expression
In the page source, each movie corresponds to one dd node:
<dd>
<i class="board-index board-index-1">1</i>
<a href="/films/1200486" title="我不是药神" class="image-link" data-act="boarditem-click" data-val="{movieId:1200486}">
<img src="//s3plus.meituan.net/v1/mss_e2821d7f0cfe4ac1bf9202ecf9590e67/cdn-prod/file:5788b470/image/loading_2.e3d934bf.png" alt="" class="poster-default" />
<img data-src="https://p0.meituan.net/movie/414176cfa3fea8bed9b579e9f42766b9686649.jpg@160w_220h_1e_1c" alt="我不是药神" class="board-img" />
</a>
<div class="board-item-main">
<div class="board-item-content">
<div class="movie-item-info">
<p class="name"><a href="/films/1200486" title="我不是药神" data-act="boarditem-click" data-val="{movieId:1200486}">我不是药神</a></p>
<p class="star">
主演:徐峥,周一围,王传君
</p>
<p class="releasetime">上映时间:2018-07-05</p> </div>
<div class="movie-item-number score-num">
<p class="score"><i class="integer">9.</i><i class="fraction">6</i></p>
</div>
</div>
</div>
</dd>
The regular expression used to extract the information:
<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>
The parentheses capture seven pieces of information: ranking, image URL, title, starring actors, release time, the integer part of the score, and the fractional part of the score.
import re

def parse_one_page(html):
    pattern = re.compile(r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a'
                         r'.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>'
                         r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>', re.S)
    items = re.findall(pattern, html)
    return items
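To see what the seven groups capture, you can run the pattern against a condensed version of the <dd> block shown above (attributes trimmed for brevity; the exact output depends on the real page source):

import re

sample = '''<dd>
<i class="board-index board-index-1">1</i>
<img data-src="https://p0.meituan.net/movie/414176cf.jpg" alt="我不是药神" class="board-img" />
<p class="name"><a href="/films/1200486">我不是药神</a></p>
<p class="star">主演:徐峥,周一围,王传君</p>
<p class="releasetime">上映时间:2018-07-05</p>
<p class="score"><i class="integer">9.</i><i class="fraction">6</i></p>
</dd>'''

pattern = re.compile(r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a'
                     r'.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>'
                     r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>', re.S)
print(re.findall(pattern, sample))
# [('1', 'https://p0.meituan.net/movie/414176cf.jpg', '我不是药神',
#   '主演:徐峥,周一围,王传君', '上映时间:2018-07-05', '9.', '6')]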
3. Clean up the extracted data
The raw tuples are fairly messy, so they need some post-processing.
We iterate over the matches and turn each one into a dictionary, producing structured data.
Two pieces of Python do the work (a short demo follows the list):
(1) yield: informally, yield returns a value the way return does, but it also remembers where it left off, so the next iteration resumes right after that point (which makes the function a generator)
(2) strip: removes whitespace from both ends of a string
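A tiny demonstration of both, with throwaway values chosen only for illustration:

def gen():
    yield 1   # return 1, pause here, resume on the next iteration
    yield 2

print(list(gen()))               # [1, 2]
print('  主演:徐峥  '.strip())   # '主演:徐峥' -- both ends trimmed
print('主演:徐峥'.strip()[3:])   # '徐峥' -- slice off the 3-character '主演:' prefix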
The updated code:
def parse_one_page(html):
    pattern = re.compile(r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a'
                         r'.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>'
                         r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>', re.S)
    items = re.findall(pattern, html)
    for item in items:
        yield {
            'index': item[0],
            'image': item[1],
            'title': item[2],
            'actor': item[3].strip()[3:],   # drop the '主演:' prefix
            'time': item[4].strip()[5:],    # drop the '上映时间:' prefix
            'score': item[5] + item[6]      # '9.' + '6' -> '9.6'
        }
4. Crawl the remaining pages
All that changes between pages is the offset parameter appended to the URL (see the aside after this snippet):
def main(offset):
    url = 'https://maoyan.com/board/4?offset=' + str(offset)
    html = get_one_page(url)
    for item in parse_one_page(html):
        print(item)

if __name__ == '__main__':
    for i in range(10):
        main(offset=i * 10)
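As an aside, hand-concatenating query strings is easy to mistype; requests can also build the query string itself via its params argument. A sketch of the same request, reusing the headers dict from get_one_page:

response = requests.get('https://maoyan.com/board/4',
                        params={'offset': offset},   # requests appends ?offset=... for us
                        headers=headers)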
step3 Write the results to a file
Each dictionary from parse_one_page is serialized as one JSON line and appended to result.txt:
import json

def write_to_file(item):
    # Append one JSON object per line; ensure_ascii=False keeps Chinese text readable.
    with open('result.txt', 'a', encoding='utf-8') as f:
        f.write(json.dumps(item, ensure_ascii=False) + '\n')
step4 Results
1. Complete code:
import json
import re
import time

import requests
from requests.exceptions import RequestException

def get_one_page(url):
    try:
        headers = {
            'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36'
        }
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        return None

def parse_one_page(html):
    pattern = re.compile(r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a'
                         r'.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>'
                         r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>', re.S)
    items = re.findall(pattern, html)
    for item in items:
        yield {
            'index': item[0],
            'image': item[1],
            'title': item[2],
            'actor': item[3].strip()[3:],
            'time': item[4].strip()[5:],
            'score': item[5] + item[6]
        }

def write_to_file(content):
    with open('result.txt', 'a', encoding='utf-8') as f:
        f.write(json.dumps(content, ensure_ascii=False) + '\n')

def main(offset):
    url = 'https://maoyan.com/board/4?offset=' + str(offset)
    html = get_one_page(url)
    if html is None:   # request failed; skip this page
        return
    for item in parse_one_page(html):
        print(item)
        write_to_file(item)

if __name__ == '__main__':
    for i in range(10):
        main(offset=i * 10)
        time.sleep(1)  # pause between pages to avoid hammering the site
2. The result.txt file
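Each line of result.txt holds one movie as a JSON object. Based on the sample <dd> block shown earlier, the first line should look roughly like this (illustrative; actual values depend on the live chart):

{"index": "1", "image": "https://p0.meituan.net/movie/414176cfa3fea8bed9b579e9f42766b9686649.jpg@160w_220h_1e_1c", "title": "我不是药神", "actor": "徐峥,周一围,王传君", "time": "2018-07-05", "score": "9.6"}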