IDE used: PyCharm.
1. First, set up your Python environment. Install it on your own; a recommended Python installation guide is linked here.
2. Set up the Scrapy environment by searching for and installing Scrapy in PyCharm's Settings. Because Scrapy depends on several other libraries, install those dependencies first. Install them one by one, from bottom to top as shown in the screenshot above.
3. Create a Scrapy project. In PyCharm's terminal, run scrapy startproject projectName, where projectName is whatever name you choose. When the command finishes, open the newly created projectName folder in PyCharm, then open Settings and install Scrapy for this project; only Scrapy itself is needed this time. The arrow in the screenshot below will be red; choose Show All, add an interpreter, and accept the defaults.
Then run scrapy genspider dianping dianping.com in the terminal. This command creates a spider script named dianping whose allowed domain is dianping.com. With that, the Scrapy scaffolding is basically complete; the directory structure looks like this:
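For reference, a freshly generated project usually looks roughly like the tree below (the exact listing can vary by Scrapy version; dianping.py under spiders/ is the file genspider creates):

dianping/
    scrapy.cfg            # deploy/run configuration
    dianping/             # the project's Python module
        __init__.py
        items.py          # item (field) definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/
            __init__.py
            dianping.py   # spider created by genspider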
4. Next, open Dianping and inspect the source of the pages we want to scrape.
All the data we need lives inside this div. Now we have to decide which fields to scrape; here I only collect the merchant name and star rating. Why only these two fields is explained at the end.
First, define in items.py which fields the spider will ultimately collect:
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class DianpingItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    fitness_name = scrapy.Field()   # merchant name
    fitness_start = scrapy.Field()  # merchant star rating
    # fitness_type = scrapy.Field()
    # fitness_addr1 = scrapy.Field()
    # fitness_addr2 = scrapy.Field()
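As a quick illustration (not part of the project files), Scrapy items behave like dicts, so the spider can fill the fields one by one; the values below are hypothetical:

item = DianpingItem()
item['fitness_name'] = 'Some Gym'   # hypothetical merchant name
item['fitness_start'] = '5星'        # hypothetical star-rating text
print(item['fitness_name'])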
Next comes pipelines.py. This file does the cleanup work, i.e. it decides what happens to the data you scrape. In my case, each item is appended to a local txt file:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import codecs


class DianpingPipeline(object):
    def process_item(self, item, spider):
        fileName = '重庆健身房.txt'
        # Append each item as one line: "name star-rating"
        with codecs.open(fileName, 'a+', 'utf-8') as fp:
            fp.write('%s %s \r\n' % (item['fitness_name'], item['fitness_start']))
        return item  # return the item so any later pipelines can also process it
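As an aside, if all you want is a flat file, Scrapy's built-in feed exports can replace a custom pipeline entirely. Running the crawl like this (gyms.csv is just an example filename) writes every yielded item to CSV:

scrapy crawl chongQingSpider -o gyms.csv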
Then define how to crawl inside the spider file we created; mine is ChongqingspiderSpider.py:
# -*- coding: utf-8 -*-
import scrapy
from dianping.items import DianpingItem


class ChongqingspiderSpider(scrapy.Spider):
    name = 'chongQingSpider'
    allowed_domains = ['dianping.com']
    offset = 1
    url = 'https://www.dianping.com/search/keyword/9/0_%E5%81%A5%E8%BA%AB%E6%88%BF/p'
    start_urls = [url + str(offset)]

    def parse(self, response):
        for each in response.xpath("//div[@class='shop-list J_shop-list shop-all-list']/ul/li"):
            item = DianpingItem()
            # merchant name
            item['fitness_name'] = each.xpath(".//img/@title").extract()[0]
            # merchant star rating
            item['fitness_start'] = each.xpath(".//div[@class='comment']/span/@title").extract()[0]
            # shop type and address; address 1 may be absent, so it needs a check
            # at_tag = 0
            # for at in each.xpath(".//div[@class='tag-addr']"):
            #     for att in at.xpath(".//a/span[@class='tag']/text()"):
            #         if at_tag == 0:
            #             item['fitness_type'] = at.xpath(".//a/span[@class='tag']/text()").extract()[0]
            #             at_tag += 1
            #         elif at_tag == 1:
            #             item['fitness_addr1'] = at.xpath(".//a/span[@class='tag']/text()").extract()[1]
            #             at_tag += 1
            #
            # # address 2
            # item['fitness_addr2'] = each.xpath(".//div[@class='tag-addr']/span[@class='addr']/text()").extract()[0]
            yield item

        if self.offset < 50:
            self.offset += 1
            # After processing one page, send the request for the next page:
            # increment self.offset, append it to build the new url, and let
            # the self.parse callback handle the response
            yield scrapy.Request(self.url + str(self.offset), callback=self.parse)
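One caveat: extract()[0] raises IndexError whenever a node is missing from a listing. A slightly more defensive variant of the two assignments above (a sketch, using Scrapy's extract_first, which returns a default instead of raising) would be:

# inside parse(), for each <li>:
item['fitness_name'] = each.xpath(".//img/@title").extract_first(default='')
item['fitness_start'] = each.xpath(".//div[@class='comment']/span/@title").extract_first(default='')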
At this point your spider is basically written, but it still can't run; you also have to configure the settings file:
# -*- coding: utf-8 -*-
# Scrapy settings for dianping project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'dianping'
SPIDER_MODULES = ['dianping.spiders']
NEWSPIDER_MODULE = 'dianping.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'dianping.middlewares.DianpingSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'dianping.middlewares.DianpingDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'dianping.pipelines.DianpingPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
# Let 403 responses through to the spider instead of treating them as errors,
# so a 403 does not kill the crawl.
HTTPERROR_ALLOWED_CODES = [403]
Of all of this, the block below is required. The other settings mostly deal with impersonating a normal user (user agent, headers, delays); any project-specific configuration also goes in this file.
ITEM_PIPELINES = {
    'dianping.pipelines.DianpingPipeline': 300,
}
This tells Scrapy that the final results are handled by the DianpingPipeline class in the dianping.pipelines module; the number 300 is the pipeline's order (lower numbers run first).
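For example, if you later added a second pipeline (SomeCleanupPipeline here is purely hypothetical), the order values would decide which runs first:

ITEM_PIPELINES = {
    'dianping.pipelines.DianpingPipeline': 300,
    # 'dianping.pipelines.SomeCleanupPipeline': 200,  # hypothetical: 200 < 300, so it would run first
}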
Finally, in the terminal, run scrapy crawl chongQingSpider from the directory that contains scrapy.cfg.
If you hit a win32-related error when running it, just run pip install pypiwin32 first.
Lastly, why did I only scrape the name and star rating? Because Dianping has anti-scraping measures. Take the shop type, for example: if you look at the source, it is not shown as plain text, as the figure below illustrates. The value is rendered through CSS positioning tricks, which is more trouble to decode, so I skipped it.