Scrapy Crawlers in Practice

  • Introduction to Scrapy
    • Key Features
    • Example Code
  • Installing Scrapy and Creating a Project
  • Running a Single Spider
    • Code Example
      • Configuration
        • item
        • settings
      • Spider Script
    • Code Walkthrough
      • XPath basics:
      • Path expression examples:
      • Wildcards and multiple paths:
      • Functions:
      • Example:
  • Batch Running
  • Appendix 1: Persisting to a Database
  • Appendix 2: Starting a Database Locally

Introduction to Scrapy

Scrapy is a powerful open-source web crawling framework for extracting data from websites. Known for its extensibility and flexibility, it is widely used in data mining, information processing, and historical data collection. See the official site for details.

Key Features

  1. Modular architecture: Scrapy is built from modular components, including the engine, scheduler, downloader, spiders, and pipelines. You can selectively use or extend each piece as needed.

  2. Selectors: Scrapy provides flexible, powerful selectors that make it easy to extract data from pages with CSS or XPath expressions.

  3. Middleware support: custom middleware lets you process requests and responses, for example to modify request headers, route through proxies, or handle errors.

  4. Automatic throttling: Scrapy can throttle itself to avoid overloading target sites, and also supports a configurable download delay.

  5. Concurrency control: asynchronous processing and concurrent requests improve crawl throughput.

  6. Extensibility: a rich extension API lets you implement custom behavior as plugins.

  7. Data storage: through the pipeline mechanism, Scrapy can store scraped data in many formats, such as JSON, CSV, or a database.

  8. Friendly command-line tools: an intuitive, easy-to-use CLI makes it simple to create, run, and manage crawl projects.

Example Code

import scrapy

class MySpider(scrapy.Spider):
    name = 'my_spider' # spider name; scrapy crawl runs spiders by this name, not by the class name
    start_urls = ['http://example.com'] # entry URL(s) for the spider

    def parse(self, response):
        # Extract data with selectors
        title = response.css('h1::text').get()
        body = response.css('p::text').get()

        # Return the scraped data
        yield {
            'title': title,
            'body': body,
        }

This simple example shows that by defining a spider class, a start URL, and a parse method, you can quickly build a basic crawler.

That concludes the brief introduction to Scrapy; its flexibility and power make it the Swiss army knife of web crawling.

Installing Scrapy and Creating a Project

Install Scrapy with pip, Python's package manager:

pip install scrapy

Once the install finishes, create a project with Scrapy:

scrapy startproject sw

After creation, my directory layout looks like this:

sw/
│
├── sw/
│   ├── __init__.py
│   ├── items.py
│   ├── middlewares.py
│   ├── pipelines.py
│   ├── settings.py
│   └── spiders/
│       └── __init__.py
│
├── scrapy.cfg
└── README.md

What each directory and file is for:

  • sw/sw/: the project's Python module, containing the main crawler code.

    • __init__.py: empty file marking the directory as a Python package.
    • items.py: defines the data models used to store scraped data.
    • middlewares.py: custom middleware for processing requests and responses.
    • pipelines.py: custom pipelines for storing and processing scraped data.
    • settings.py: project settings and configuration. (If you connect to a database, its credentials go in this file.)
    • spiders/: directory holding the spider code.
      • __init__.py: empty file marking the directory as a Python package.
  • scrapy.cfg: the Scrapy project configuration file, with project metadata and settings.

  • README.md: the project's documentation: description, usage instructions, and so on.

This is a standard Scrapy project structure; you can adjust and extend it to fit your needs and project size.

Running a Single Spider

Code Example

Configuration

First, set up the configuration.

item

items.py contains the following:

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy

class SwItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    url = scrapy.Field()
    title = scrapy.Field()
    time = scrapy.Field()
    content = scrapy.Field()
    scrapy_time = scrapy.Field()

    trans_title = scrapy.Field()
    trans_content = scrapy.Field()
    
    org = scrapy.Field()
    trans_org = scrapy.Field()
settings

settings.py contains the following:

# Scrapy settings for sw project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = "sw"

SPIDER_MODULES = ["sw.spiders"]
NEWSPIDER_MODULE = "sw.spiders"
DOWNLOAD_DELAY = 3
RANDOMIZE_DOWNLOAD_DELAY = True
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
COOKIES_ENABLED = True


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = "sw (+http://www.yourdomain.com)"

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
#    "Accept-Language": "en",
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    "sw.middlewares.SwSpiderMiddleware": 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    "sw.middlewares.SwDownloaderMiddleware": 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    "scrapy.extensions.telnet.TelnetConsole": None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    "sw.pipelines.SwPipeline": 300,
#}

# Database connection settings
# DB_SETTINGS = {
#     'host': '127.0.0.1',
#     'port': 3306,
#     'user': 'root',
#     'password': '123456',
#     'db': 'scrapy_news_2024_01_08',
#     'charset': 'utf8mb4',
# }

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = "httpcache"
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"

# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"
# REDIRECT_ENABLED = False
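A note on the two delay settings near the top of this file: with RANDOMIZE_DOWNLOAD_DELAY enabled (the default), Scrapy waits a random interval between 0.5x and 1.5x DOWNLOAD_DELAY between requests to the same site. A minimal sketch of that sampling behavior (an illustration of the documented rule, not Scrapy's actual code):

```python
import random

DOWNLOAD_DELAY = 3  # seconds, as in the settings above

def next_delay(base=DOWNLOAD_DELAY, randomize=True):
    """Mimic Scrapy's per-request delay: uniform in [0.5 * base, 1.5 * base]."""
    if not randomize:
        return base
    return random.uniform(0.5 * base, 1.5 * base)

samples = [next_delay() for _ in range(1000)]
print(min(samples), max(samples))  # always within [1.5, 4.5]
```

The randomization makes the crawl pattern look less mechanical to the target site while keeping the average delay at DOWNLOAD_DELAY.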

Spider Script

The process is very simple: just create your own code under the spiders/ directory. The example p3_new_39.py follows. (Tip: drop this file into spiders/, then run scrapy crawl p3_new_39 -o p3_new_39.csv from the console; the -o flag specifies the output file.)

"""
Created on 2024/01/06 14:00 by Fxy
"""
import scrapy
from sw.items import SwItem
from datetime import datetime

class SWSpider(scrapy.Spider):
    '''
    Scrapy variables
    '''
    # Spider name (your choice); scrapy crawl runs the spider by this name
    name = "p3_new_39"
    # Domains the spider is allowed to crawl
    allowed_domains = ["www.meduniwien.ac.at"]
    # Start URL(s) for the spider
    start_urls = ["https://www.meduniwien.ac.at/web/en/about-us/news/"]
    # Create a SwItem instance
    item = SwItem()
    '''
    Custom variables
    '''
    # Organization name
    org = "奥地利维也纳医科大学病毒学中心"
    # Organization name in English
    org_e = "Med Univ Vienna, Ctr Virol"
    # Date formats
    site_date_format = '%Y-%m-%d %H:%M' # date format on the site
    date_format = '%d.%m.%Y %H:%M:%S' # target date format
    # Site language pair, used when calling the translation API
    language_type = "zh2zh"


    # Main entry point: collect the links to all archived articles
    def parse(self, response):
        achieve_links = response.xpath('//*[@id="c4345"]//div[@class="news-teaser__caption"]/h2/a/@href').extract()
        print("achieve_links", achieve_links)
        for achieve_link in achieve_links:
            if "http" in achieve_link:  # skip absolute (external) links
                continue
            full_achieve_link = "https://www.meduniwien.ac.at" + achieve_link
            print("full_achieve_link", full_achieve_link)
            # Follow each archive link
            yield scrapy.Request(full_achieve_link, callback=self.parse_item, dont_filter=True)

        # Pagination
        xpath_expression = '//*[@id="c4345"]//ul[@class="pagination"]/li[@class="next"]/a/@href'
        next_page = response.xpath(xpath_expression).extract_first()
        print("next_page = ", next_page)

        # Follow the next page, if there is one
        if next_page is not None:
            print(next_page)
            print('next page')
            full_next_page = "https://www.meduniwien.ac.at" + next_page
            print("full_next_page", full_next_page)
            yield scrapy.Request(full_next_page, callback=self.parse, dont_filter=True)


    # Extract each article's content and store it in the item
    def parse_item(self, response):
        source_url = response.url
        print("source_url:", source_url)
        title_o = response.xpath('//*[@id="main"]/header/div/div[2]/div[1]/h1/text()').extract_first().strip()
        # title_t = my_tools.get_trans(title_o, "de2zh")
        print("title_o:", title_o)
        year_string = response.xpath('//div[@class="news-detail__meta"]/span/@data-year').extract_first().strip()
        month_string = response.xpath('//div[@class="news-detail__meta"]/span/@data-month').extract_first().strip()
        day_string = response.xpath('//div[@class="news-detail__meta"]/span/@data-day').extract_first().strip()
        hour_string = response.xpath('//div[@class="news-detail__meta"]/span/@data-hour').extract_first().strip()
        minute_string = response.xpath('//div[@class="news-detail__meta"]/span/@data-minute').extract_first().strip()
        publish_time = f'{year_string}-{month_string}-{day_string} {hour_string}:{minute_string}'
        print("publish_time:", publish_time)
        date_object = datetime.strptime(publish_time, self.site_date_format) # parse using the site's date format
        date_object = date_object.strftime(self.date_format) # reformat as the target date string
        publish_time = datetime.strptime(date_object, self.date_format) # parse the formatted string back into a datetime

        # The xpath call returns a list of text fragments, so join them into one string
        content_o = [content.strip() for content in response.xpath('//div[@class="content__block"]//text()').extract()]
        content_o = ' '.join(content_o)
        # content_t = my_tools.get_trans(content_o, "de2zh")

        print("source_url:", source_url)
        print("title_o:", title_o)
        # print("title_t:", title_t)
        print("publish_time:", publish_time) # e.g. 15.01.2008
        print("content_o:", content_o)
        # print("content_t:", content_t)
        print("-" * 50)

        page_data = {
            'source_url': source_url,
            'title_o': title_o,
            # 'title_t': title_t,
            'publish_time': publish_time,
            'content_o': content_o,
            # 'content_t': content_t,
            'org': self.org,
            'org_e': self.org_e,
        }
        self.item['url'] = page_data['source_url']
        self.item['title'] = page_data['title_o']
        # self.item['trans_title'] = page_data['title_t']
        self.item['time'] = page_data['publish_time']
        self.item['content'] = page_data['content_o']
        # self.item['trans_content'] = page_data['content_t']
        # Get the current time
        current_time = datetime.now()
        # Format it as a string
        formatted_time = current_time.strftime(self.date_format)
        # Parse the string back into a datetime object
        datetime_object = datetime.strptime(formatted_time, self.date_format)
        self.item['scrapy_time'] = datetime_object
        self.item['org'] = page_data['org']
        self.item['trans_org'] = page_data['org_e']

        yield self.item

Run it from the console with scrapy crawl p3_new_39 -o p3_new_39.csv; the -o flag specifies the output file.
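The date juggling in parse_item above is easy to lose track of: the page's data-* attributes are stitched into a string, parsed with site_date_format, reformatted, and parsed again. The same round trip in isolation (the attribute values below are made up for illustration):

```python
from datetime import datetime

site_date_format = '%Y-%m-%d %H:%M'  # format of the string built from the page
date_format = '%d.%m.%Y %H:%M:%S'    # target format

# e.g. data-year="2024", data-month="1", data-day="6" (hypothetical values;
# strptime accepts non-zero-padded fields, so "1" and "6" are fine)
publish_time = '2024-1-6 14:30'
dt = datetime.strptime(publish_time, site_date_format)  # parse the site format
s = dt.strftime(date_format)                            # '06.01.2024 14:30:00'
final = datetime.strptime(s, date_format)               # back to a datetime
print(s)
```

The final strptime exists only because the pipeline expects a datetime object rather than a string; the intermediate strftime normalizes the value into the target format first.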

Code Walkthrough

Next, let's look at how the arguments to response.xpath() in p3_new_39.py above are determined.
1. Open the page: https://www.meduniwien.ac.at/web/en/about-us/news/

2. Decide what you want to scrape; here I take it to be each news entry. (screenshot)
3. Open the browser's developer tools (F12 by default) and find the link the entry points to. (screenshot)
4. Right-click the element and copy its XPath. (screenshot)
My copied result: //*[@id="c4345"]/div/div[1]/div/h2/a
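Note that the browser-copied expression is positional (div/div[1]/...), so it breaks as soon as the page layout shifts; the spider above instead anchors on a class attribute, which is more robust. Python's standard-library ElementTree supports enough of XPath to try this out on a mock snippet (the markup below is an illustrative stand-in, not the site's real HTML):

```python
import xml.etree.ElementTree as ET

# Stripped-down stand-in for the news list markup
html = """
<div id="c4345">
  <div class="news-teaser__caption">
    <h2><a href="/web/en/news/article-1/">Article 1</a></h2>
  </div>
  <div class="news-teaser__caption">
    <h2><a href="/web/en/news/article-2/">Article 2</a></h2>
  </div>
</div>
"""
root = ET.fromstring(html)

# Class-based lookup, like the spider's expression: survives layout changes
links = [a.get('href') for a in root.findall(".//div[@class='news-teaser__caption']/h2/a")]
print(links)  # ['/web/en/news/article-1/', '/web/en/news/article-2/']
```

In the spider the same idea is written as //*[@id="c4345"]//div[@class="news-teaser__caption"]/h2/a/@href, with Scrapy's selector doing the attribute extraction directly.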

XPath basics:

XPath (XML Path Language) is a query language for locating and selecting nodes in an XML document. It can be used not only with XML but also with HTML and other markup languages. The main syntax and common usage:

  1. Node selection

    • /: select starting from the root node.
    • //: select matching nodes anywhere in the document, regardless of position.
    • .: select the current node.
    • ..: select the parent of the current node.
  2. Node names

    • elementName: select all nodes named elementName.
    • *: select all child nodes.
  3. Predicates

    • [condition]: filter nodes by adding a condition.
      • For example, //div[@class='example'] selects all div nodes whose class attribute is 'example'.
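The selection and predicate rules above can be tried directly in Python: the standard library's xml.etree.ElementTree implements a useful subset of XPath (attribute predicates and simple positional predicates, though not everything; position()<3, for instance, is unsupported there). A quick sketch over a small bookstore document:

```python
import xml.etree.ElementTree as ET

xml = """
<bookstore>
  <book><title lang="en">Introduction to XPath</title><price>29.95</price></book>
  <book><title lang="fr">XPath et ses applications</title><price>39.99</price></book>
</bookstore>
"""
root = ET.fromstring(xml)

first = root.find("./book[1]/title").text       # 'Introduction to XPath'
last = root.find("./book[last()]/title").text   # 'XPath et ses applications'
english = [t.text for t in root.findall(".//title[@lang='en']")]
print(first, last, english)
```

Scrapy's own selectors (built on lxml) support full XPath 1.0, so everything listed in this section works in response.xpath().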

Path expression examples:

  • /bookstore/book[1]: selects the first <book> element.
  • /bookstore/book[last()]: selects the last <book> element.
  • /bookstore/book[position()<3]: selects the first two <book> elements.
  • //title[@lang='en']: selects all <title> elements whose lang attribute is 'en'.
  • //title[@lang='en']/text(): selects the text content of all <title> elements whose lang attribute is 'en'.
  • //title[not(@lang='en')]/text(): selects the text content of all <title> elements whose lang attribute is not 'en'.
  • //title[@lang='en' or @lang='zh']/text(): selects the text content of all <title> elements whose lang attribute is 'en' or 'zh'.
  • //title[contains(@lang, 'en')]/text(): selects the text content of all <title> elements whose lang attribute contains the substring 'en'.

Wildcards and multiple paths:

  • *: wildcard, matches any element node.
  • @*: matches any attribute node.
  • //book/title | //book/price: selects all <title> and <price> children of <book> elements.

Functions:

XPath also provides built-in functions, for example:

  • text(): returns a node's text content.
  • contains(str1, str2): tests whether one string contains another.

Example:

Consider the following XML structure:

<bookstore>
  <book>
    <title lang="en">Introduction to XPath</title>
    <price>29.95</price>
  </book>
  <book>
    <title lang="fr">XPath et ses applications</title>
    <price>39.99</price>
  </book>
</bookstore>

With XPath you can select, for instance:

  • /bookstore/book: selects all <book> elements.
  • /bookstore/book/title[@lang='en']: selects all <title> elements whose lang attribute is 'en'.

These are the basics of XPath syntax and usage; they let you locate and extract data from XML or HTML documents flexibly and precisely.

Batch Running

You can create a main.py in the sw project directory:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
crawler = CrawlerProcess(settings)

bot_list = ["p3_new_39"] # add every spider you want to run
for bot in bot_list:
    crawler.crawl(bot)
crawler.start()

Appendix 1: Persisting to a Database

One very nice property of Scrapy is its support for automatically persisting items to a database. Once the pipeline code is written, Scrapy invokes it for every item on its own, with no explicit calls needed, which is very convenient. My pipelines.py is as follows:

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
import pymysql

class SwPipeline:
    def __init__(self, db_settings):
        self.db_settings = db_settings

    @classmethod
    def from_crawler(cls, crawler):
        db_settings = crawler.settings.get("DB_SETTINGS")
        return cls(db_settings)

    def open_spider(self, spider):
        self.connection = pymysql.connect(**self.db_settings)
        self.cursor = self.connection.cursor()

    def close_spider(self, spider):
        self.connection.close()

    def process_item(self, item, spider):
        # Assuming the item keys match the column names in the database table.
        # (Check that the field names defined in items.py match the columns in
        # your table; mine don't, so this query needs adjusting before it runs
        # as-is.)
        keys = ', '.join(item.keys())
        values = ', '.join(['%s'] * len(item))
        query = f"INSERT INTO org_news ({keys}) VALUES ({values})"

        # Skip the insert if a matching record already exists, based on a
        # combination of fields. (If the table had a unique key, you could
        # rely on that instead; mine doesn't.)
        unique_fields = ["title_o", "source_url"]  # replace with the actual field names you want to use
        check_query = f"SELECT * FROM org_news WHERE {' AND '.join(f'{field} = %s' for field in unique_fields)}"
        check_values = tuple(item.get(field) for field in unique_fields)

        try:
            # Check whether the record already exists
            self.cursor.execute(check_query, check_values)
            existing_record = self.cursor.fetchone()
            if existing_record:
                spider.logger.warning("Record already exists. Skipping insertion.")
            else:
                # If the record doesn't exist, insert it into the database
                self.cursor.execute(query, tuple(item.values()))
                self.connection.commit()
        except Exception as e:
            self.connection.rollback()
            spider.logger.error(f"Error processing item and inserting data into database: {e}")
        return item

The SQL I used to create the table:

/*
 Navicat Premium Data Transfer

 Source Server         : 172.16.6.165
 Source Server Type    : MySQL
 Source Server Version : 80035
 Source Host           : 172.16.6.165:3306
 Source Schema         : swaq

 Target Server Type    : MySQL
 Target Server Version : 80035
 File Encoding         : 65001

 Date: 08/01/2024 10:01:57
*/

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for org_news
-- ----------------------------
DROP TABLE IF EXISTS `org_news`;
CREATE TABLE `org_news` (
  `id` int(0) NOT NULL AUTO_INCREMENT,
  `title_t` varchar(1000) CHARACTER SET utf8mb3 COLLATE utf8mb3_general_ci NULL DEFAULT NULL COMMENT 'translated title',
  `title_o` varchar(1000) CHARACTER SET utf8mb3 COLLATE utf8mb3_general_ci NULL DEFAULT NULL COMMENT 'original title',
  `content_t` longtext CHARACTER SET utf8mb3 COLLATE utf8mb3_general_ci NULL COMMENT 'translated content',
  `publish_time` datetime(0) NULL DEFAULT NULL COMMENT 'publish time',
  `content_o` longtext CHARACTER SET utf8mb3 COLLATE utf8mb3_general_ci NULL COMMENT 'original content',
  `site` varchar(255) CHARACTER SET utf8mb3 COLLATE utf8mb3_general_ci NULL DEFAULT NULL COMMENT 'news source',
  `tag` varchar(255) CHARACTER SET utf8mb3 COLLATE utf8mb3_general_ci NULL DEFAULT NULL COMMENT 'tag',
  `author` varchar(255) CHARACTER SET utf8mb3 COLLATE utf8mb3_general_ci NULL DEFAULT NULL COMMENT 'author',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT 'scrape time',
  `source_url` varchar(1000) CHARACTER SET utf8mb3 COLLATE utf8mb3_general_ci NULL DEFAULT NULL COMMENT 'url',
  `country` varchar(
class="token number">100</span><span class="token punctuation">)</span> <span class="token keyword">CHARACTER</span> <span class="token keyword">SET</span> utf8mb3 <span class="token keyword">COLLATE</span> utf8mb3_general_ci <span class="token boolean">NULL</span> <span class="token keyword">DEFAULT</span> <span class="token boolean">NULL</span> <span class="token keyword">COMMENT</span> <span class="token string">'国家/地区'</span><span class="token punctuation">,</span> <span class="token identifier"><span class="token punctuation">`</span>imgurl<span class="token punctuation">`</span></span> <span class="token keyword">varchar</span><span class="token punctuation">(</span><span class="token number">255</span><span class="token punctuation">)</span> <span class="token keyword">CHARACTER</span> <span class="token keyword">SET</span> utf8mb3 <span class="token keyword">COLLATE</span> utf8mb3_general_ci <span class="token boolean">NULL</span> <span class="token keyword">DEFAULT</span> <span class="token boolean">NULL</span> <span class="token keyword">COMMENT</span> <span class="token string">'图片存放地址'</span><span class="token punctuation">,</span> <span class="token identifier"><span class="token punctuation">`</span>org<span class="token punctuation">`</span></span> <span class="token keyword">varchar</span><span class="token punctuation">(</span><span class="token number">255</span><span class="token punctuation">)</span> <span class="token keyword">CHARACTER</span> <span class="token keyword">SET</span> utf8mb3 <span class="token keyword">COLLATE</span> utf8mb3_general_ci <span class="token boolean">NULL</span> <span class="token keyword">DEFAULT</span> <span class="token boolean">NULL</span> <span class="token keyword">COMMENT</span> <span class="token string">'机构名称'</span><span class="token punctuation">,</span> <span class="token identifier"><span class="token punctuation">`</span>org_e<span class="token punctuation">`</span></span> <span class="token 
keyword">varchar</span><span class="token punctuation">(</span><span class="token number">255</span><span class="token punctuation">)</span> <span class="token keyword">CHARACTER</span> <span class="token keyword">SET</span> utf8mb3 <span class="token keyword">COLLATE</span> utf8mb3_general_ci <span class="token boolean">NULL</span> <span class="token keyword">DEFAULT</span> <span class="token boolean">NULL</span> <span class="token keyword">COMMENT</span> <span class="token string">'机构英文名称'</span><span class="token punctuation">,</span> <span class="token keyword">PRIMARY</span> <span class="token keyword">KEY</span> <span class="token punctuation">(</span><span class="token identifier"><span class="token punctuation">`</span>id<span class="token punctuation">`</span></span><span class="token punctuation">)</span> <span class="token keyword">USING</span> <span class="token keyword">BTREE</span> <span class="token punctuation">)</span> <span class="token keyword">ENGINE</span> <span class="token operator">=</span> <span class="token keyword">InnoDB</span> <span class="token keyword">AUTO_INCREMENT</span> <span class="token operator">=</span> <span class="token number">314</span> <span class="token keyword">CHARACTER</span> <span class="token keyword">SET</span> <span class="token operator">=</span> utf8mb3 <span class="token keyword">COLLATE</span> <span class="token operator">=</span> utf8mb3_general_ci ROW_FORMAT <span class="token operator">=</span> Dynamic<span class="token punctuation">;</span> <span class="token keyword">SET</span> FOREIGN_KEY_CHECKS <span class="token operator">=</span> <span class="token number">1</span><span class="token punctuation">;</span> </code></pre> <h2>附录2,如何在本地启动数据库</h2> <p>1、先安装mysql<br> 2、配置好?(我电脑的mysql几年前安装的,下载了一个navicat)<br> 3、windows控制台输入<code>mysqld --console</code>应该就能启动了。<br> 注意: <code>mysqld 是服务端程序</code> ,<code>mysql是命令行客户端程序</code><br> 4、然后应该就能够连接了</p> </div> </div> </div> </div> </div> <!--PC和WAP自适应版--> <div 
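varchar">

With the server running and the table above created, scraped items can be written to MySQL from a Scrapy item pipeline. Below is a minimal sketch, not the article's actual pipeline code: the table name `news`, the `MYSQL_*` setting names, and the default credentials are illustrative assumptions, and `pymysql` must be installed (`pip install pymysql`).

```python
# Minimal sketch of a Scrapy item pipeline that inserts each scraped item
# into the MySQL table defined above. The table name `news` and the
# MYSQL_* setting names are assumptions for illustration only.

# Columns matching the schema above (minus the auto-increment `id`).
COLUMNS = ["content_o", "site", "tag", "author", "create_time",
           "source_url", "country", "imgurl", "org", "org_e"]

def build_insert(table, columns):
    """Build a parameterized INSERT using pymysql's %s placeholder style."""
    cols = ", ".join(f"`{c}`" for c in columns)
    marks = ", ".join(["%s"] * len(columns))
    return f"INSERT INTO `{table}` ({cols}) VALUES ({marks})"

class MySQLPipeline:
    def open_spider(self, spider):
        import pymysql  # deferred import so the module loads without the driver
        s = spider.crawler.settings
        self.conn = pymysql.connect(
            host=s.get("MYSQL_HOST", "localhost"),
            user=s.get("MYSQL_USER", "root"),
            password=s.get("MYSQL_PASSWORD", ""),
            database=s.get("MYSQL_DB", "sw"),
            charset="utf8mb4",
        )

    def process_item(self, item, spider):
        sql = build_insert("news", COLUMNS)
        with self.conn.cursor() as cur:
            # Missing fields are inserted as NULL via item.get().
            cur.execute(sql, [item.get(c) for c in COLUMNS])
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.conn.close()
```

To enable it, register the class in `settings.py` under `ITEM_PIPELINES`, e.g. `ITEM_PIPELINES = {"sw.pipelines.MySQLPipeline": 300}` (the dotted path depends on where you place the class), and add the `MYSQL_*` values to the same file.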
id="SOHUCS" sid="1746010954237231104"></div> <script type="text/javascript" src="/views/front/js/chanyan.js"></script> <!-- 文章页-底部 动态广告位 --> <div class="youdao-fixed-ad" id="detail_ad_bottom"></div> </div> <div class="col-md-3"> <div class="row" id="ad"> <!-- 文章页-右侧1 动态广告位 --> <div id="right-1" class="col-lg-12 col-md-12 col-sm-4 col-xs-4 ad"> <div class="youdao-fixed-ad" id="detail_ad_1"> </div> </div> <!-- 文章页-右侧2 动态广告位 --> <div id="right-2" class="col-lg-12 col-md-12 col-sm-4 col-xs-4 ad"> <div class="youdao-fixed-ad" id="detail_ad_2"></div> </div> <!-- 文章页-右侧3 动态广告位 --> <div id="right-3" class="col-lg-12 col-md-12 col-sm-4 col-xs-4 ad"> <div class="youdao-fixed-ad" id="detail_ad_3"></div> </div> </div> </div> </div> </div> </div> <div class="container"> <h4 class="pt20 mb15 mt0 border-top">你可能感兴趣的:(爬虫,scrapy,爬虫)</h4> <div id="paradigm-article-related"> <div class="recommend-post mb30"> <ul class="widget-links"> <li><a href="/article/1950175452580605952.htm" title="Gerapy爬虫管理框架深度解析:企业级分布式爬虫管控平台" target="_blank">Gerapy爬虫管理框架深度解析:企业级分布式爬虫管控平台</a> <span class="text-muted">Python×CATIA工业智造</span> <a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/%E5%88%86%E5%B8%83%E5%BC%8F/1.htm">分布式</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/pycharm/1.htm">pycharm</a> <div>引言:爬虫工程化的必然选择随着企业数据采集需求指数级增长,传统单点爬虫管理模式面临三重困境:管理效率瓶颈:手动部署耗时占开发总时长的40%以上系统可靠性低:研究显示超过65%的爬虫故障源于部署或调度错误资源利用率差:平均爬虫服务器CPU利用率不足30%爬虫管理方案对比:┌───────────────┬─────────────┬───────────┬───────────┬──────────</div> </li> <li><a href="/article/1949945858665541632.htm" title="Python爬虫【五十八章】Python数据清洗与分析全攻略:从Pandas到深度学习的异常检测进阶" target="_blank">Python爬虫【五十八章】Python数据清洗与分析全攻略:从Pandas到深度学习的异常检测进阶</a> <span class="text-muted">程序员_CLUB</span> <a class="tag" taget="_blank" href="/search/Python%E5%85%A5%E9%97%A8%E5%88%B0%E8%BF%9B%E9%98%B6/1.htm">Python入门到进阶</a><a class="tag" taget="_blank" 
href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/pandas/1.htm">pandas</a> <div>目录背景与需求分析第一章:结构化数据清洗实战(Pandas核心技法)1.1数据去重策略矩阵1.2智能缺失值处理体系第二章:深度学习异常检测进阶2.1自动编码器异常检测(时序数据)2.2图神经网络异常检测(关系型数据)第三章:综合案例实战案例1:金融交易反欺诈系统案例2:工业传感器异常检测第四章:性能优化与工程实践4.1大数据处理加速技巧4.2模型部署方案第五章:方法论总结与展望5.1方法论框架5.</div> </li> <li><a href="/article/1949945859365990400.htm" title="Python【一】Python全方位知识指南" target="_blank">Python【一】Python全方位知识指南</a> <span class="text-muted">程序员_CLUB</span> <a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a> <div>目录背景:为什么Python成为开发者必备技能?‌‌一、Python是什么?‌‌二、Python能做什么?六大核心应用场景‌‌1.自动化办公‌‌2.网络爬虫‌‌3.数据分析‌‌三、零基础入门Python:环境搭建与学习路径‌‌1.环境搭建(Windows/Mac详细步骤)‌2‌.基础语法速成(7天掌握)‌四、实战项目推荐(*****)‌‌五、学习建议与避坑指南(新手常见错误)‌六、总结:**背景:</div> </li> <li><a href="/article/1949945732429574144.htm" title="Python爬虫【三十五章】爬虫高阶:基于Docker集群的动态页面自动化采集系统实战" target="_blank">Python爬虫【三十五章】爬虫高阶:基于Docker集群的动态页面自动化采集系统实战</a> <span class="text-muted">程序员_CLUB</span> <a class="tag" taget="_blank" href="/search/Python%E5%85%A5%E9%97%A8%E5%88%B0%E8%BF%9B%E9%98%B6/1.htm">Python入门到进阶</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/docker/1.htm">docker</a> <div>目录一、技术演进与行业痛点二、核心技术栈深度解析2.1动态渲染三件套2.2Docker集群架构设计2.3自动化调度系统三、进阶实战案例3.1电商价格监控系统1.技术指标对比2.实现细节3.2新闻聚合平台1.WebSocket监控2.字体反爬破解四、性能优化与运维方案4.1资源消耗对比测试4.2集群运维体系五、总结与未来展望六、Python爬虫相关文章(推荐)一、技术演进与行业痛点在Web3.0时代</div> </li> <li><a href="/article/1949945604893372416.htm" title="Python爬虫【三十二章】爬虫高阶:动态页面处理与Scrapy+Selenium+BeautifulSoup分布式架构深度解析实战" target="_blank">Python爬虫【三十二章】爬虫高阶:动态页面处理与Scrapy+Selenium+BeautifulSoup分布式架构深度解析实战</a> <span class="text-muted"></span> 
<div>目录引言一、动态页面爬取的技术背景1.1动态页面的核心特征1.2传统爬虫的局限性二、技术选型与架构设计2.1核心组件分析2.2架构设计思路1.分层处理2.数据流三、代码实现与关键技术3.1Selenium与Scrapy的中间件集成3.2BeautifulSoup与ScrapyItem的整合3.3分布式爬取实现3.3.1Scrapy-Redis部署3.3.2多节点启动四、优化与扩展4.1性能优化策略</div> </li> <li><a href="/article/1949945605325385728.htm" title="Python爬虫【三十三章】爬虫高阶:动态页面破解与验证码OCR识别全流程实战" target="_blank">Python爬虫【三十三章】爬虫高阶:动态页面破解与验证码OCR识别全流程实战</a> <span class="text-muted">程序员_CLUB</span> <a class="tag" taget="_blank" href="/search/Python%E5%85%A5%E9%97%A8%E5%88%B0%E8%BF%9B%E9%98%B6/1.htm">Python入门到进阶</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/ocr/1.htm">ocr</a> <div>目录一、技术背景与行业痛点二、核心技术与实现路径2.1动态页面处理方案对比2.2Selenium深度集成实践2.3OCR验证码破解方案1.预处理阶段:2.识别阶段:3.后处理阶段三、典型应用场景解析3.1电商价格监控系统1.技术架构2.实现效果3.2社交媒体舆情分析1.特殊挑战2.优化方案:四、合规性与风险控制五、总结Python爬虫相关文章(推荐)一、技术背景与行业痛点在Web3.0时代,网站反</div> </li> <li><a href="/article/1949945606000668672.htm" title="Python爬虫【三十四章】爬虫高阶:动态页面处理与Playwright增强控制深度解析" target="_blank">Python爬虫【三十四章】爬虫高阶:动态页面处理与Playwright增强控制深度解析</a> <span class="text-muted">程序员_CLUB</span> <a class="tag" taget="_blank" href="/search/Python%E5%85%A5%E9%97%A8%E5%88%B0%E8%BF%9B%E9%98%B6/1.htm">Python入门到进阶</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a> <div>目录一、技术演进背景与行业挑战二、核心技术栈深度解析2.1动态渲染双引擎架构2.2浏览器指纹伪装方案2.3BeautifulSoup集成实践三、进阶应用场景突破3.1电商价格监控系统3.1.1技术架构创新3.1.2实现效果3.2社交媒体舆情分析3.2.1无限滚动模拟3.2.2WebSocket监控3.2.3Canvas指纹防护四、性能优化与合规方案4.1资源消耗对比测试4.2反爬对抗升级方案五、总</div> </li> <li><a href="/article/1949943967034437632.htm" title="Python爬虫【三十一章】爬虫高阶:动态页面处理与Scrapy+Selenium+Celery弹性伸缩架构实战" target="_blank">Python爬虫【三十一章】爬虫高阶:动态页面处理与Scrapy+Selenium+Celery弹性伸缩架构实战</a> <span class="text-muted"></span> 
<div>目录引言一、动态页面爬取的技术挑战1.1动态页面的核心特性1.2传统爬虫的局限性二、Scrapy+Selenium:动态爬虫的核心架构2.1技术选型依据2.2架构设计2.3代码实现示例三、Celery:分布式任务队列的引入3.1为什么需要Celery?3.2Celery架构设计3.3代码实现示例3.4Scrapy与Celery的集成四、优化与扩展4.1性能优化4.2分布式部署4.3反爬对抗五、总结</div> </li> <li><a href="/article/1949899078280212480.htm" title="十年爬虫经验告诉你爬虫被封怎么办" target="_blank">十年爬虫经验告诉你爬虫被封怎么办</a> <span class="text-muted">congqian8750</span> <a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a> <div>十年爬虫经验告诉你爬虫被封怎么办现在很多站长都会有抓取数据的需求,因此网络爬虫在一定程度上越来越火爆,其实爬虫的基本功能很简单,就是分析大量的url的html页面,从而提取新的url,但是在实际操作中通常都会遇到各种各样的问题,比如说抓取数据的过程中需要根据实际需求来筛选url继续爬行;或者说为了能正常爬取,减少别人服务器的压力,你需要控制住爬取的速度和工作量···但是即便再小心,很多时候也会遇到</div> </li> <li><a href="/article/1949898823811788800.htm" title="【NLP舆情分析】基于python微博舆情分析可视化系统(flask+pandas+echarts) 视频教程 - 微博文章数据可视化分析-文章分类下拉框实现" target="_blank">【NLP舆情分析】基于python微博舆情分析可视化系统(flask+pandas+echarts) 视频教程 - 微博文章数据可视化分析-文章分类下拉框实现</a> <span class="text-muted">java1234_小锋</span> <a class="tag" taget="_blank" href="/search/NLP/1.htm">NLP</a><a class="tag" taget="_blank" href="/search/NLLP%E5%BE%AE%E5%8D%9A%E8%88%86%E6%83%85%E5%88%86%E6%9E%90/1.htm">NLLP微博舆情分析</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E8%87%AA%E7%84%B6%E8%AF%AD%E8%A8%80%E5%A4%84%E7%90%86/1.htm">自然语言处理</a><a class="tag" taget="_blank" href="/search/flask/1.htm">flask</a> <div>大家好,我是java1234_小锋老师,最近写了一套【NLP舆情分析】基于python微博舆情分析可视化系统(flask+pandas+echarts)视频教程,持续更新中,计划月底更新完,感谢支持。今天讲解微博文章数据可视化分析-文章分类下拉框实现视频在线地址:2026版【NLP舆情分析】基于python微博舆情分析可视化系统(flask+pandas+echarts+爬虫)视频教程(火爆连载更</div> </li> <li><a href="/article/1949897938671038464.htm" title="Scrapy 爬虫 IP 被封问题的解决方案" target="_blank">Scrapy 爬虫 IP 被封问题的解决方案</a> <span class="text-muted">杨胜增</span> <a class="tag" taget="_blank" href="/search/scrapy/1.htm">scrapy</a><a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/tcp%2Fip/1.htm">tcp/ip</a> 
<div>Scrapy爬虫IP被封问题的解决方案在使用Scrapy进行网络爬虫开发时,IP被封是一个常见的问题。当爬虫频繁地向目标网站发送请求时,目标网站可能会检测到异常流量,并将爬虫的IP地址加入黑名单,导致后续请求无法正常访问。本文将详细介绍Scrapy爬虫IP被封问题的原因及解决方案。问题描述在运行Scrapy爬虫时,可能会遇到以下类似的情况:请求返回403Forbidden错误,表示服务器拒绝了请求</div> </li> <li><a href="/article/1949897179338436608.htm" title="Python requests设置代理的3种方法" target="_blank">Python requests设置代理的3种方法</a> <span class="text-muted">爱睡觉的圈圈</span> <a class="tag" taget="_blank" href="/search/%E4%BB%A3%E7%90%86%E6%9C%8D%E5%8A%A1/1.htm">代理服务</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E7%BD%91%E7%BB%9C/1.htm">网络</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a><a class="tag" taget="_blank" href="/search/%E4%BB%A3%E7%90%86%E6%A8%A1%E5%BC%8F/1.htm">代理模式</a> <div>在进行网络爬虫或数据采集时,经常需要使用代理来避免IP被封或突破访问限制。本文介绍Pythonrequests库设置代理的3种常用方法。方法一:基础代理设置最简单的代理设置方式:importrequests#设置代理proxies={'http':'http://proxy_ip:port','https':'https://proxy_ip:port'}#发送请求response=request</div> </li> <li><a href="/article/1949897180282155008.htm" title="代理IP的类型详解:数据中心vs住宅IP" target="_blank">代理IP的类型详解:数据中心vs住宅IP</a> <span class="text-muted"></span> <div>前言做爬虫的时候,代理IP是绕不开的话题。但很多人对代理IP的分类不太了解,经常花了钱却买到不合适的代理,结果还是被封。今天详细聊聊代理IP的分类,特别是数据中心IP和住宅IP的区别,帮你选到最适合的代理。代理IP基础分类按协议分类HTTP代理#只支持HTTP协议proxy={'http':'http://username:password@proxy.com:8080'}HTTPS代理#支持HT</div> </li> <li><a href="/article/1949897180768694272.htm" title="如何避免IP被加入黑名单:实用防护指南" target="_blank">如何避免IP被加入黑名单:实用防护指南</a> <span class="text-muted">爱睡觉的圈圈</span> <a class="tag" taget="_blank" href="/search/%E4%BB%A3%E7%90%86%E6%9C%8D%E5%8A%A1/1.htm">代理服务</a><a class="tag" taget="_blank" href="/search/tcp%2Fip/1.htm">tcp/ip</a><a class="tag" taget="_blank" href="/search/%E7%BD%91%E7%BB%9C%E5%8D%8F%E8%AE%AE/1.htm">网络协议</a><a class="tag" taget="_blank" href="/search/%E7%BD%91%E7%BB%9C/1.htm">网络</a> 
<div>前言IP被封是爬虫开发者最头疼的问题。很多人以为换个User-Agent就能解决,结果还是被秒封。现代反爬虫系统已经非常智能,不仅看IP访问频率,还会分析浏览器指纹、行为模式、TLS指纹等多个维度。要想真正避免被封,需要从多个角度进行防护。今天分享一套完整的IP保护方案,结合Selenium、指纹浏览器等成熟工具,让你的爬虫更像真实用户。反爬虫检测原理网站如何识别爬虫#现代反爬虫系统的检测维度de</div> </li> <li><a href="/article/1949897182211534848.htm" title="爬虫入门:为什么你的爬虫需要代理IP?" target="_blank">爬虫入门:为什么你的爬虫需要代理IP?</a> <span class="text-muted"></span> <div>前言作为一名在爬虫领域摸爬滚打多年的程序员,我经常收到新手朋友的疑问:"为什么我的爬虫跑了一会儿就不工作了?"今天,我就来详细讲解为什么爬虫需要代理IP,以及如何正确使用代理IP来提升爬虫的稳定性和效率。一、爬虫面临的挑战1.1反爬虫机制的普及现代网站都配备了各种反爬虫机制,最常见的包括:反爬虫机制IP限制User-Agent检测验证码行为分析请求频率限制1.2IP封禁的痛点让我们看一个典型的爬虫</div> </li> <li><a href="/article/1949895033859665920.htm" title="Python爬虫IP被封的5种解决方案" target="_blank">Python爬虫IP被封的5种解决方案</a> <span class="text-muted"></span> <div>前言做爬虫的朋友都遇到过这种情况:程序跑得好好的,突然就开始返回403错误,或者直接连接超时。十有八九是IP被网站封了。现在的网站反爬虫越来越严格,稍微频繁一点就会被拉黑。今天分享几个实用的解决方案,都是我在实际项目中用过的。方案一:代理IP池这是最直接的办法,换个马甲继续干活。基本实现importrequestsimportrandomimporttimeclassProxyPool:def__</div> </li> <li><a href="/article/1949893017594818560.htm" title="Python爬虫实战:研究picloud相关技术" target="_blank">Python爬虫实战:研究picloud相关技术</a> <span class="text-muted">ylfhpy</span> <a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB%E9%A1%B9%E7%9B%AE%E5%AE%9E%E6%88%98/1.htm">爬虫项目实战</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a><a class="tag" taget="_blank" href="/search/picloud/1.htm">picloud</a> <div>一、引言1.1研究背景与意义在数字化时代,网络数据已成为企业决策、学术研究和社会服务的重要资源。爬虫技术作为自动化获取网络信息的关键手段,在舆情监测、市场分析、学术研究等领域具有广泛应用。Python以其简洁的语法和丰富的爬虫库(如Requests、BeautifulSoup、Scrapy)成为爬虫开发的首选语言。然而,面对海量数据和高并发需求,本地爬虫系统往往面临性能瓶颈。picloud作为专业</div> </li> <li><a href="/article/1949893018341404672.htm" title="Python爬虫实战:研究flanker相关技术" target="_blank">Python爬虫实战:研究flanker相关技术</a> <span class="text-muted">ylfhpy</span> <a class="tag" taget="_blank" 
href="/search/%E7%88%AC%E8%99%AB%E9%A1%B9%E7%9B%AE%E5%AE%9E%E6%88%98/1.htm">爬虫项目实战</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a><a class="tag" taget="_blank" href="/search/flanker/1.htm">flanker</a> <div>1.引言1.1研究背景与意义在当今信息爆炸的时代,互联网上的数据量呈现出指数级增长的趋势。如何从海量的网页数据中高效地获取有价值的信息,成为了一个重要的研究课题。网络爬虫作为一种自动获取网页内容的技术,能够帮助用户快速、准确地收集所需的信息,因此在信息检索、数据挖掘、舆情分析等领域得到了广泛的应用。Flanker技术是一种基于文本分析的信息提取技术,它能够从非结构化的文本中识别和提取出特定类型的信</div> </li> <li><a href="/article/1949892890675179520.htm" title="Python爬虫实战入门:手把手教你抓取豆瓣电影TOP250" target="_blank">Python爬虫实战入门:手把手教你抓取豆瓣电影TOP250</a> <span class="text-muted">xiaobindeshijie7</span> <a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a><a class="tag" taget="_blank" href="/search/%E5%85%B6%E4%BB%96/1.htm">其他</a> <div>文章目录一、环境准备(5分钟搞定)二、第一个爬虫实战(超简单版)2.1基础版代码2.2代码解剖(新人必看)三、突破反爬机制(实战精华)3.1伪装大法3.2请求频率控制3.3代理IP使用四、数据存储(多种姿势)4.1CSV存储4.2MySQL存储五、进阶技巧(高手必备)5.1异步爬虫5.2Selenium动态渲染六、法律与伦理(超级重要!!!)七、下一步学习路线一、环境准备(5分钟搞定)工欲善其事必</div> </li> <li><a href="/article/1949885953996812288.htm" title="BeautifulSoup库深度解析:Python高效解析网页数据的秘籍" target="_blank">BeautifulSoup库深度解析:Python高效解析网页数据的秘籍</a> <span class="text-muted"></span> <div>在Python爬虫开发领域,获取网页内容后,如何高效解析并提取所需数据是关键一环。BeautifulSoup库凭借其简洁易用、功能强大的特点,成为众多开发者解析网页数据的首选工具。本文将深入剖析BeautifulSoup库,通过丰富的实例,帮助你掌握其核心功能与使用技巧,实现网页数据的精准提取。一、认识BeautifulSoup库BeautifulSoup是Python的一个第三方库,主要用于解析</div> </li> <li><a href="/article/1949885827723096064.htm" title="Python BeautifulSoup 解析网页按钮元素" target="_blank">Python BeautifulSoup 解析网页按钮元素</a> <span class="text-muted">PythonAI编程架构实战家</span> <a class="tag" taget="_blank" 
href="/search/Python%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD%E4%B8%8E%E5%A4%A7%E6%95%B0%E6%8D%AE/1.htm">Python人工智能与大数据</a><a class="tag" taget="_blank" href="/search/Python%E7%BC%96%E7%A8%8B%E4%B9%8B%E9%81%93/1.htm">Python编程之道</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/beautifulsoup/1.htm">beautifulsoup</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a><a class="tag" taget="_blank" href="/search/ai/1.htm">ai</a> <div>PythonBeautifulSoup解析网页按钮元素:从基础原理到工程实践的深度解析关键词BeautifulSoup、HTML解析、按钮元素定位、DOM树遍历、CSS选择器、网络爬虫、前端自动化摘要本文系统解析使用PythonBeautifulSoup库定位和提取网页按钮元素的全流程技术方案。从HTML文档的底层结构出发,结合BeautifulSoup的核心解析机制,覆盖从基础概念到高级工程实践</div> </li> <li><a href="/article/1949884187657957376.htm" title="Python网络爬虫技术深度解析:从入门到高级实战" target="_blank">Python网络爬虫技术深度解析:从入门到高级实战</a> <span class="text-muted">Python爬虫项目</span> <a class="tag" taget="_blank" href="/search/2025%E5%B9%B4%E7%88%AC%E8%99%AB%E5%AE%9E%E6%88%98%E9%A1%B9%E7%9B%AE/1.htm">2025年爬虫实战项目</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a><a class="tag" taget="_blank" href="/search/easyui/1.htm">easyui</a><a class="tag" taget="_blank" href="/search/scrapy/1.htm">scrapy</a> <div>1.爬虫技术概述网络爬虫(WebCrawler)是一种自动化程序,通过模拟人类浏览行为从互联网上抓取、解析和存储数据。根据应用场景可分为:通用爬虫:如搜索引擎的蜘蛛程序聚焦爬虫:针对特定领域的数据采集增量式爬虫:只抓取更新内容深层网络爬虫:处理需要交互的动态内容2.2024年Python爬虫技术栈技术分类推荐工具适用场景基础请求库requests,httpx静态页面请求解析库BeautifulSo</div> </li> <li><a href="/article/1949858838136025088.htm" title="XPath" target="_blank">XPath</a> <span class="text-muted">class心平气和</span> <a class="tag" taget="_blank" href="/search/%E6%9C%8D%E5%8A%A1%E5%99%A8/1.htm">服务器</a><a class="tag" taget="_blank" href="/search/%E5%89%8D%E7%AB%AF/1.htm">前端</a><a class="tag" taget="_blank" 
href="/search/%E8%BF%90%E7%BB%B4/1.htm">运维</a> <div>一、XPath基础概念XPath(XMLPathLanguage)是一种用于在XML或HTML文档中定位节点的语言,广泛应用于网页爬虫、数据提取和文档处理。以下将从基础概念到高级技巧全面解析XPath。XPath是一种路径表达式语言,用于在XML/HTML文档中导航和选择节点。二、XPath路径表达式基础1.绝对路径与相对路径绝对路径:从根节点开始,用/分隔,例:/html/body/div#从H</div> </li> <li><a href="/article/1949792023003328512.htm" title="让 UniApp X “飞”起来:用 SSR 实现服务器端渲染,打造首屏秒开体验" target="_blank">让 UniApp X “飞”起来:用 SSR 实现服务器端渲染,打造首屏秒开体验</a> <span class="text-muted">脑袋大大的</span> <a class="tag" taget="_blank" href="/search/uniappx%E7%94%9F%E6%80%81%E4%B8%93%E6%A0%8F/1.htm">uniappx生态专栏</a><a class="tag" taget="_blank" href="/search/%E5%89%8D%E7%AB%AF/1.htm">前端</a><a class="tag" taget="_blank" href="/search/javascript/1.htm">javascript</a><a class="tag" taget="_blank" href="/search/vue.js/1.htm">vue.js</a><a class="tag" taget="_blank" href="/search/uniapp/1.htm">uniapp</a><a class="tag" taget="_blank" href="/search/uniappx/1.htm">uniappx</a> <div>你有没有遇到过这样的尴尬?用户打开你的UniApp项目,首屏白屏几秒钟,用户还没看到内容就走了。尤其是在SEO场景下,搜索引擎爬虫来了,你却只能返回一个“加载中…”的页面,结果自然是——被搜索引擎无情抛弃。但好消息是,从HBuilderX4.18版本起,UniAppX正式支持SSR(ServerSideRendering)服务器端渲染,这意味着你可以让你的UniApp应用“首屏即内容”,秒开页面、</div> </li> <li><a href="/article/1949757350831255552.htm" title="程序代码篇---python获取http界面上按钮或者数据输入" target="_blank">程序代码篇---python获取http界面上按钮或者数据输入</a> <span class="text-muted">Atticus-Orion</span> <a class="tag" taget="_blank" href="/search/%E7%A8%8B%E5%BA%8F%E4%BB%A3%E7%A0%81%E7%AF%87/1.htm">程序代码篇</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/http/1.htm">http</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a> <div>在Python中获取HTTP界面上的按钮点击或数据输入,主要有两种场景:作为客户端:模拟用户在网页上输入数据、点击按钮(比如爬虫自动提交表单)。作为服务端:搭建一个网页服务,接收用户在浏览器中输入的数据和按钮点击(比如自己写一个简单的Web应用)。下面分别用通俗易懂的方式讲解这两种场景的实现方法和代码。一、作为客户端:模拟用户操作网页(自动输入和点击)这种场景常用于自动化测试或数据爬取,需要模拟用</div> </li> <li><a href="/article/1949711201546072064.htm" title="selenium 反爬虫识别特征处理" target="_blank">selenium 反爬虫识别特征处理</a> 
<span class="text-muted"></span> <div>因为业务中发现网站对selenium特征识别为爬虫了,因此在搜索引擎中搜索进行处理方式一#实例化一个浏览器对象options=webdriver.ChromeOptions()options.add_experimental_option('excludeSwitches',['enable-automation'])ifsys.platform=="win32":browser=webdrive</div> </li> <li><a href="/article/1949706156138098688.htm" title="selenium之反反爬虫" target="_blank">selenium之反反爬虫</a> <span class="text-muted">无惧代码</span> <a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/selenium/1.htm">selenium</a><a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a> <div>大多数情况下,检测的基本原理是检测当前浏览器窗口下的window.navigator对象是否包含webdriver这个属性。在正常使用浏览器的情况下,这个属性是undefined,然后一旦我们使用了selenium,这个属性就被初始化为true,很多网站就通过Javascript判断这个属性实现简单的反selenium爬虫。反反爬虫解决措施:fromseleniumimportwebdriverf</div> </li> <li><a href="/article/1949701863217623040.htm" title="爬虫入门(7)——反爬(3)Selenium" target="_blank">爬虫入门(7)——反爬(3)Selenium</a> <span class="text-muted">WHJ226</span> <a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB%E5%85%A5%E9%97%A8/1.htm">爬虫入门</a><a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/selenium/1.htm">selenium</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a> <div>目录1Selenium定位方法1.1id定位1.2name定位1.3XPath定位1.4classname定位2模拟操作2.1模拟点击操作2.2模拟输入和搜索操作2.3模拟清除3控制浏览器操作3.1设置浏览器尺寸3.2控制浏览器后退和前进3.3刷新页面爬虫入门(6)——反爬(2)_WHJ226的博客-CSDN博客在该博客-CSDN博客博客中讲了动态渲染,Selenium安装,驱动器下载及配置,以及</div> </li> <li><a href="/article/1949630514302349312.htm" title="Python爬虫“折戟”真相大揭秘:数据获取失败全剖析" target="_blank">Python爬虫“折戟”真相大揭秘:数据获取失败全剖析</a> <span class="text-muted"></span> <div>爬虫数据获取:理想与现实的落差**在数据驱动的时代,数据宛如一座蕴藏无限价值的宝藏矿山,而Python爬虫则是我们深入矿山挖掘宝藏的得力工具。想象一下,你精心编写了一段Python爬虫代码,满心期待着它能像勤劳的矿工一样,源源不断地从网页中采集到你所需要的数据。当一切准备就绪,代码开始运行,那跳动的进度条仿佛是希望的脉搏。有时候现实却给我们泼了一盆冷水。原本期待着收获满满一桶数据,结果得到的却是寥</div> </li> <li><a 
href="/article/1949630514797277184.htm" title="Python爬虫打怪升级:数据获取疑难全解析" target="_blank">Python爬虫打怪升级:数据获取疑难全解析</a> <span class="text-muted">女码农的重启</span> <a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a> <div>一、引言**在大数据时代,数据就是价值的源泉。而Python爬虫,作为数据获取的得力助手,凭借Python简洁的语法和丰富强大的库,在众多领域发挥着重要作用。无论是电商领域的价格监测、市场调研中的数据收集,还是学术研究里的文献获取,Python爬虫都能大显身手。例如,通过爬取电商平台的商品信息,我们可以分析市场趋势,为企业决策提供有力支持;在学术研究中,利用爬虫获取大量文献资料,能帮助研究人员快速</div> </li> <li><a href="/article/65.htm" title="Java常用排序算法/程序员必须掌握的8大排序算法" target="_blank">Java常用排序算法/程序员必须掌握的8大排序算法</a> <span class="text-muted">cugfy</span> <a class="tag" taget="_blank" href="/search/java/1.htm">java</a> <div>分类: 1)插入排序(直接插入排序、希尔排序) 2)交换排序(冒泡排序、快速排序) 3)选择排序(直接选择排序、堆排序) 4)归并排序 5)分配排序(基数排序) 所需辅助空间最多:归并排序 所需辅助空间最少:堆排序 平均速度最快:快速排序 不稳定:快速排序,希尔排序,堆排序。 先来看看8种排序之间的关系:   1.直接插入排序 (1</div> </li> <li><a href="/article/192.htm" title="【Spark102】Spark存储模块BlockManager剖析" target="_blank">【Spark102】Spark存储模块BlockManager剖析</a> <span class="text-muted">bit1129</span> <a class="tag" taget="_blank" href="/search/manager/1.htm">manager</a> <div>Spark围绕着BlockManager构建了存储模块,包括RDD,Shuffle,Broadcast的存储都使用了BlockManager。而BlockManager在实现上是一个针对每个应用的Master/Executor结构,即Driver上BlockManager充当了Master角色,而各个Slave上(具体到应用范围,就是Executor)的BlockManager充当了Slave角色</div> </li> <li><a href="/article/319.htm" title="linux 查看端口被占用情况详解" target="_blank">linux 查看端口被占用情况详解</a> <span class="text-muted">daizj</span> <a class="tag" taget="_blank" href="/search/linux/1.htm">linux</a><a class="tag" taget="_blank" href="/search/%E7%AB%AF%E5%8F%A3%E5%8D%A0%E7%94%A8/1.htm">端口占用</a><a class="tag" taget="_blank" href="/search/netstat/1.htm">netstat</a><a class="tag" taget="_blank" href="/search/lsof/1.htm">lsof</a> <div>经常在启动一个程序会碰到端口被占用,这里讲一下怎么查看端口是否被占用,及哪个程序占用,怎么Kill掉已占用端口的程序   1、lsof -i:port port为端口号   [root@slave 
/data/spark-1.4.0-bin-cdh4]# lsof -i:8080 COMMAND   PID USER   FD   TY</div> </li> <li><a href="/article/446.htm" title="Hosts文件使用" target="_blank">Hosts文件使用</a> <span class="text-muted">周凡杨</span> <a class="tag" taget="_blank" href="/search/hosts/1.htm">hosts</a><a class="tag" taget="_blank" href="/search/locahost/1.htm">locahost</a> <div>     一切都要从localhost说起,经常在tomcat容器起动后,访问页面时输入http://localhost:8088/index.jsp,大家都知道localhost代表本机地址,如果本机IP是10.10.134.21,那就相当于http://10.10.134.21:8088/index.jsp,有时候也会看到http: 127.0.0.1:</div> </li> <li><a href="/article/573.htm" title="java excel工具" target="_blank">java excel工具</a> <span class="text-muted">g21121</span> <a class="tag" taget="_blank" href="/search/Java+excel/1.htm">Java excel</a> <div>直接上代码,一看就懂,利用的是jxl: import java.io.File; import java.io.IOException; import jxl.Cell; import jxl.Sheet; import jxl.Workbook; import jxl.read.biff.BiffException; import jxl.write.Label; import </div> </li> <li><a href="/article/700.htm" title="web报表工具finereport常用函数的用法总结(数组函数)" target="_blank">web报表工具finereport常用函数的用法总结(数组函数)</a> <span class="text-muted">老A不折腾</span> <a class="tag" taget="_blank" href="/search/finereport/1.htm">finereport</a><a class="tag" taget="_blank" href="/search/web%E6%8A%A5%E8%A1%A8/1.htm">web报表</a><a class="tag" taget="_blank" href="/search/%E5%87%BD%E6%95%B0%E6%80%BB%E7%BB%93/1.htm">函数总结</a> <div>ADD2ARRAY ADDARRAY(array,insertArray, start):在数组第start个位置插入insertArray中的所有元素,再返回该数组。 示例: ADDARRAY([3,4, 1, 5, 7], [23, 43, 22], 3)返回[3, 4, 23, 43, 22, 1, 5, 7]. 