Today we will use a Python web scraper to download all of Jin Yong's wuxia novels. The site is http://jinyong.zuopinj.com/, and its page looks like this:
The complete Python code is as follows:
# -*- coding: utf-8 -*-
import urllib.request
from bs4 import BeautifulSoup

# Fetch the title and body text of one chapter page
def get_chapter(url):
    # Download the page's HTML source
    html = urllib.request.urlopen(url)
    content = html.read().decode('utf8')
    html.close()
    # Parse the source code into an HTML tree
    soup = BeautifulSoup(content, "lxml")
    title = soup.find('h1').text                    # chapter title
    text = soup.find('div', id='htmlContent')       # chapter body
    # Tidy up the chapter text so the format is cleaner
    content = text.get_text('\n', 'br/').replace('\n', '\n ')
    content = content.replace(' ', '\n ')
    return title, ' ' + content

def main():
    # List of books to download
    books = ['射雕英雄传', '天龙八部', '鹿鼎记', '神雕侠侣', '笑傲江湖', '碧血剑', '倚天屠龙记',
             '飞狐外传', '书剑恩仇录', '连城诀', '侠客行', '越女剑', '鸳鸯刀', '白马啸西风',
             '雪山飞狐']
    order = [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 14, 15, 13, 9]  # each book's id in the site's URLs
    # Page-number ranges: book i spans pages page_range[i] .. page_range[i+1]-1
    page_range = [1, 43, 94, 145, 185, 225, 248, 289, 309, 329, 341, 362, 363, 364, 375, 385]
    for i, book in enumerate(books):
        for num in range(page_range[i], page_range[i+1]):
            url = "http://jinyong.zuopinj.com/%s/%s.html" % (order[i], num)
            # Error handling: report a failed page and carry on with the next one
            try:
                title, chapter = get_chapter(url)
                with open('E://%s.txt' % book, 'a', encoding='gb18030') as f:
                    print(book + ':' + title + '-->写入成功!')
                    f.write(title + '\n\n\n')
                    f.write(chapter + '\n\n\n')
            except Exception as e:
                print(e)
    print('全部写入完毕!')

if __name__ == '__main__':
    main()
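If you want to check what get_chapter returns before downloading everything, you can call it on a single chapter URL first. Below is a minimal sketch: the URL follows the same pattern as in main() (book id 1, page number 1), and it assumes the site's page structure is unchanged:

# Quick check: fetch one chapter page and inspect the result.
# get_chapter is the function defined in the script above; the URL uses
# the same pattern as main(), with book id 1 and page number 1.
url = "http://jinyong.zuopinj.com/1/1.html"
title, chapter = get_chapter(url)
print(title)          # chapter title taken from the <h1> tag
print(chapter[:200])  # first 200 characters of the cleaned chapter text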
The result of running the script looks like this:
The "HTTP Error 404: Not Found" in the output above appears because that particular page does not exist on the site; it does not affect the completeness of any book. We can now go to drive E and check whether the files were downloaded successfully:
All 15 books have been downloaded, and the whole process took less than 10 minutes! The power of web scraping really is impressive~~
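One optional refinement if you reuse this script: since the only failure we expect is that single 404, the broad except Exception in main() could be narrowed so that a missing page is skipped explicitly while other problems are still reported. Here is a possible sketch of the try block (same variable names as in the script above; an illustrative variation, not part of the original code):

import urllib.error

try:
    title, chapter = get_chapter(url)
except urllib.error.HTTPError as e:
    # The page number does not exist on the site (e.g. 404); just skip it.
    print('%s skipped: %s' % (url, e))
except Exception as e:
    # Any other failure (network problem, unexpected page structure, ...) is still reported.
    print(e)
else:
    with open('E://%s.txt' % book, 'a', encoding='gb18030') as f:
        print(book + ':' + title + '-->写入成功!')
        f.write(title + '\n\n\n')
        f.write(chapter + '\n\n\n')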