python - A Walkthrough of Scraping Beauty Pictures with Scrapy

Topics covered:

1. Environment setup

2. Creating the Scrapy project

3. Code

4. Launching the spider


Solution:


Environment setup

Windows 7 x64 + Python 2.7.5 + Scrapy 1.1.0

  • Download the required installers: https://yunpan.cn/cSBKpV7ufBQ6p (access code: 9404)


  • Install in this order:
(1)python-2.7.5.amd64.msi

Double-click to install, then append the install paths to the PATH environment variable:
;C:\Python27;C:\Python27\Scripts
(2)lxml-3.3.1.win-amd64-py2.7.exe
Double-click to install, then verify it: run "import lxml" in a Python shell. If no error appears, the installation succeeded.

(3)zope.interface-4.1.0.win-amd64-py2.7.exe
(4)Twisted-16.2.0-cp27-none-win_amd64.whl
  1. Open a terminal
  2. cd into the directory containing the downloaded packages
  3. pip install Twisted-16.2.0-cp27-none-win_amd64.whl


(5)pyOpenSSL-16.0.0-py2.py3-none-any.whl (install with pip, same as the step above)
(6)pywin32-218.win-amd64-py2.7.exe

  • Install Scrapy:
pip install scrapy


  • Install the Sublime Text editor. Example user settings (font and color scheme):


{
	"color_scheme": "Packages/Color Scheme - Default/iPlastic.tmTheme",
	"font_face": "consolas",
	"font_size": 14.0
}



Creating the Scrapy project


  • Create the project
(1)Open a terminal and cd to the directory where the project should live
(2)scrapy startproject jiandan
(3)Once created, open the generated jiandan folder

(4)Inside the spiders directory, create jiandanSpider.py, which will hold our spider


Code

  • jiandanSpider:
#coding:utf-8
import scrapy
from jiandan.items import JiandanItem

class jiandanSpider(scrapy.Spider):
	name = 'jiandan'
	allowed_domains = []
	start_urls = ["http://jandan.net/ooxx"]

	def parse(self, response):
		item = JiandanItem()
		# collect every image URL on the page
		item['image_urls'] = response.xpath('//img//@src').extract()
		print 'image_urls', item['image_urls']
		yield item
		# follow the "previous comment page" link to paginate
		new_url = response.xpath('//a[@class="previous-comment-page"]//@href').extract_first()
		print 'new_url', new_url
		if new_url:
			yield scrapy.Request(new_url, callback=self.parse)
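
The two XPath queries in parse() can be previewed outside Scrapy. The sketch below mimics them with only the standard library's xml.etree.ElementTree on a toy, well-formed HTML fragment (the fragment and its URLs are made up for illustration); Scrapy's own selectors are far more forgiving with real-world HTML.

```python
# Standalone sketch of the spider's two XPath lookups, standard library only.
import xml.etree.ElementTree as ET

html = """<div>
  <img src="http://example.com/a.jpg" />
  <img src="http://example.com/b.jpg" />
  <a class="previous-comment-page" href="http://example.com/page/2">older</a>
</div>"""

root = ET.fromstring(html)

# //img//@src: the src attribute of every <img>
image_urls = [img.get('src') for img in root.findall('.//img')]

# //a[@class="previous-comment-page"]//@href: the pagination link
prev = root.find('.//a[@class="previous-comment-page"]')
new_url = prev.get('href') if prev is not None else None

print(image_urls)  # ['http://example.com/a.jpg', 'http://example.com/b.jpg']
print(new_url)     # http://example.com/page/2
```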

		

  • items:
# -*- coding: utf-8 -*-

import scrapy

class JiandanItem(scrapy.Item):
    # the only field we need: the list of image URLs found on a page
    image_urls = scrapy.Field()

  • pipelines:
# -*- coding: utf-8 -*-

import os
import urllib

from jiandan import settings

class JiandanPipeline(object):

	def process_item(self, item, spider):
		# images are stored under <IMAGES_STORE>/<spider name>/
		dir_path = '%s/%s' % (settings.IMAGES_STORE, spider.name)
		print 'dir_path', dir_path
		if not os.path.exists(dir_path):
			os.makedirs(dir_path)
		for image_url in item['image_urls']:
			# use the last path segment of the URL as the file name
			file_name = image_url.split('/')[-1]
			print 'filename', file_name
			file_path = '%s/%s' % (dir_path, file_name)
			print 'filepath', file_path
			if os.path.exists(file_path):  # skip images we already downloaded
				continue
			# download the image and write it to disk
			with open(file_path, 'wb') as file_writer:
				conn = urllib.urlopen(image_url)
				file_writer.write(conn.read())
		return item
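
The pipeline derives the file name by splitting the URL on '/' and taking the last piece. A slightly more robust variant (an optional sketch, not part of the original post) parses the URL first so a query string like ?v=1 does not end up in the file name. It is written in Python 3 syntax for easy testing; the idea is identical in Python 2, where urlparse lives in the urlparse module.

```python
# Sketch: deriving a file name from an image URL, as the pipeline does,
# but stripping any query string or fragment first. Python 3 syntax;
# in Python 2 the import would be "from urlparse import urlparse".
import os
from urllib.parse import urlparse

def filename_from_url(image_url):
    # equivalent to image_url.split('/')[-1] for plain URLs,
    # but ignores ?query and #fragment parts
    return os.path.basename(urlparse(image_url).path)

print(filename_from_url('http://example.com/pics/abc123.jpg'))      # abc123.jpg
print(filename_from_url('http://example.com/pics/abc123.jpg?v=1'))  # abc123.jpg
```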

  • settings:
# -*- coding: utf-8 -*-

# Scrapy settings for jiandan project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'jiandan'

SPIDER_MODULES = ['jiandan.spiders']
NEWSPIDER_MODULE = 'jiandan.spiders'


ITEM_PIPELINES = {
   'jiandan.pipelines.JiandanPipeline': 1,

}
# ITEM_PIPELINES = {'jiandan.pipelines.ImagesPipeline': 1}
IMAGES_STORE='F:\\jiandan01'
DOWNLOAD_DELAY = 0.25
IMAGES_THUMBS = {
    # thumbnail sizes; setting this makes Scrapy's built-in ImagesPipeline
    # generate thumbnails (the custom JiandanPipeline above ignores it)
    'small': (50, 50),
    'big': (200, 200),
}

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:46.0) Gecko/20100101 Firefox/46.0'

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS=32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY=3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN=16
#CONCURRENT_REQUESTS_PER_IP=16

# Disable cookies (enabled by default)
#COOKIES_ENABLED=False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED=False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  'Accept-Language': 'en',
  'Referer':'http://jandan.net/ooxx'
}


# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'jiandan.middlewares.MyCustomSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'jiandan.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
# NOTE: AutoThrottle will honour the standard settings for concurrency and delay
#AUTOTHROTTLE_ENABLED=True
# The initial download delay
#AUTOTHROTTLE_START_DELAY=5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY=60
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG=False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED=True
#HTTPCACHE_EXPIRATION_SECS=0
#HTTPCACHE_DIR='httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES=[]
#HTTPCACHE_STORAGE='scrapy.extensions.httpcache.FilesystemCacheStorage'
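
The Referer entry in DEFAULT_REQUEST_HEADERS matters: image hosts often refuse hotlinked requests, so every request claims to come from the ooxx page. Outside Scrapy the same headers can be attached by hand; here is a minimal standard-library sketch (Python 3 syntax; only the request object is built, nothing is actually fetched):

```python
# Sketch: attaching the same Referer/User-Agent headers to a plain
# urllib request. No network call is made.
from urllib.request import Request

req = Request(
    'http://jandan.net/ooxx',
    headers={
        'Referer': 'http://jandan.net/ooxx',
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:46.0) '
                      'Gecko/20100101 Firefox/46.0',
    },
)

print(req.get_header('Referer'))  # http://jandan.net/ooxx
```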

Launching the spider

In a terminal, cd into the project root and run:


scrapy crawl jiandan
