Writing Your First Web Crawler in Python: Data Scraping Made Easy for Beginners (An In-Depth Tutorial Covering Today's Major Python Crawling Libraries)

Abstract

This is a detailed, beginner-friendly Python web-crawling tutorial that covers the key techniques from basics to advanced topics: scraping static pages with Requests and BeautifulSoup; efficient parsing with lxml, XPath, and CSS selectors; building distributed crawling projects on the Scrapy framework; handling JavaScript-rendered pages with Selenium and Playwright browser automation; raising concurrency with aiohttp and HTTPX async crawlers; and applying anti-scraping countermeasures such as proxy IP pools, User-Agent spoofing, and CAPTCHA recognition to scenarios like e-commerce data collection, news scraping, and social-media harvesting. It aims to get you productive on large-scale crawling projects and building scalable, efficient, and stable data-scraping solutions.



Table of Contents

  1. Introduction

  2. Crawler Basics

    • 2.1 What Is a Crawler?
    • 2.2 Where Crawlers Are Used
    • 2.3 The Basic Crawling Workflow
    • 2.4 Legal and Ethical Considerations
  3. Setting Up the Development Environment

    • 3.1 Installing Python (3.8 or later recommended)
    • 3.2 Creating and Activating a Virtual Environment
    • 3.3 Recommended Development Tools
  4. Basics: A Simple Crawler with Requests + BeautifulSoup

    • 4.1 Installing the Required Libraries
    • 4.2 HTTP Requests and Responses
    • 4.3 Writing Your First Crawler: Fetching a Page Title
    • 4.4 Parsing HTML: BeautifulSoup in Detail
    • 4.5 Saving Scraped Data as CSV/JSON
    • 4.6 Common Anti-Scraping Measures and Countermeasures
  5. Advanced: More Powerful Parsing Tools

    • 5.1 lxml (XPath)
    • 5.2 parsel (the parser built into Scrapy)
    • 5.3 PyQuery (jQuery-style parsing)
    • 5.4 Regular Expressions in Crawlers
  6. Frameworks: A Complete Introduction to Scrapy

    • 6.1 What Is Scrapy?
    • 6.2 Installation and Project Structure
    • 6.3 Writing Your First Scrapy Spider
    • 6.4 Items, Pipelines, and Settings
    • 6.5 Interactive Debugging with Scrapy Shell
    • 6.6 Distributed and Parallel: Configuring Scrapy Concurrency
    • 6.7 Scrapy Middlewares and Extensions (Downloader Middleware, Downloader Handler)
  7. Scraping Dynamic Content: Selenium and Playwright

    • 7.1 Why Browser Automation?
    • 7.2 Selenium Basics
    • 7.3 Playwright for Python (faster and lighter)
    • 7.4 Headless Mode and Performance Tuning
    • 7.5 Combining Selenium/Playwright with BeautifulSoup
  8. Async Crawling: aiohttp + asyncio and HTTPX

    • 8.1 Sync vs. Async: How the Performance Gain Works
    • 8.2 Getting Started with aiohttp
    • 8.3 Raising Concurrency with asyncio Coroutine Pools
    • 8.4 HTTPX: Requests with Async Support
    • 8.5 Parsing in Async Code (aiohttp + lxml)
  9. Data Storage and Deduplication

    • 9.1 Local Files: CSV, JSON, SQLite
    • 9.2 Relational Databases: MySQL/PostgreSQL
    • 9.3 NoSQL Storage: MongoDB and Friends
    • 9.4 Redis for Deduplication and Short-Term Caching
    • 9.5 Deduplication Strategies: Fingerprints, Hashes, Bloom Filters
  10. Distributed Crawling: Scrapy-Redis and Distributed Scheduling

    • 10.1 Why Go Distributed?
    • 10.2 Scrapy-Redis: Overview and Installation
    • 10.3 Distributed Dedup Queues and Scheduling
    • 10.4 A Multi-Machine Example
  11. Common Anti-Scraping Measures and Countermeasures

    • 11.1 Rate Limits and Header Spoofing
    • 11.2 Login Flows and Cookie Management
    • 11.3 CAPTCHA Recognition (a brief look)
    • 11.4 Building and Rotating a Proxy IP Pool
  12. Full Case Study: Crawling a News Site into a Database

    • 12.1 Requirements Analysis
    • 12.2 A Complete Scrapy + MySQL Implementation
    • 12.3 Code Walkthrough and FAQ
  13. Popular Third-Party Python Crawling Libraries (as of late 2024)

    • 13.1 Requests and Parsing
    • 13.2 Browser Automation
    • 13.3 Async Crawling
    • 13.4 Login Simulation and CAPTCHA Handling
    • 13.5 Anti-Scraping and Proxies
    • 13.6 Distributed Scheduling
    • 13.7 Other Useful Tools
  14. Appendix

    • 14.1 Common Errors and Fixes
    • 14.2 Quick Reference: HTTP Status Codes
    • 14.3 Learning Resources and Next Steps
  15. Summary


1. Introduction

In the age of information overload, the internet has long been the richest and most convenient source of data. From product prices on e-commerce platforms to the latest news stories, from trending topics on social media to job listings on recruitment sites: if you can think of it, you can usually pull it out of a web page with a crawler. For beginners, crawling is not mysterious: with a grasp of HTTP, HTML, and basic Python, you can get productive quickly. This tutorial is written for complete beginners and moves step by step from basic scraping through frameworks, async crawling, and distributed setups to anti-scraping countermeasures, guiding you hands-on through building a complete crawler and surveying the most widely used Python crawling libraries as of late 2024.

What this tutorial offers

  • Step by step: from the simplest requests + BeautifulSoup crawler up through Scrapy, Selenium, Playwright, and async crawlers.
  • Detailed examples: every tool and framework comes with complete, runnable sample code you can copy, run, and inspect.
  • Up-to-date library survey: introduces the mainstream libraries in the crawling ecosystem as of late 2024, so you can pick the right tool.
  • Anti-scraping and practice: from simple User-Agent spoofing to proxy IP pools, CAPTCHA recognition, and distributed deployment, covering the common defenses a target site may use.

A few notes

  1. All examples target Python 3.8+; Python 3.10 or later is strongly recommended for better compatibility and performance.
  2. When scraping, always respect the target site's robots.txt and applicable laws, and avoid putting unnecessary load on other people's servers.
  3. The "latest library" information in this article is current as of late 2024; for libraries and features released in 2025 and beyond, consult official documentation and community resources.

2. Crawler Basics

2.1 What Is a Crawler?

  • Definition: a crawler (web crawler, also called a spider or bot) is a program that automatically visits web pages and extracts and stores the useful information they contain.
  • How it works: the crawler first sends an HTTP request to a given URL and receives the page source (HTML, JSON, images, and so on); it then extracts the target data using parsing techniques such as XPath, CSS selectors, or regular expressions; finally it saves the data to a file or database.

2.2 Where Crawlers Are Used

  1. Data analysis: e-commerce price monitoring, product-review analysis, competitor research.
  2. Opinion monitoring: social-media trends, forum posts, news statistics.
  3. Search engines: Google, Bing, Baidu, and others crawl pages regularly to build their indexes.
  4. Job-listing collection: automatically gathering positions, salaries, and company information from recruitment sites.
  5. Academic research: harvesting paper metadata, building knowledge graphs, and so on.
  6. Content aggregation: sites that gather articles from scattered sources onto a single platform.

2.3 The Basic Crawling Workflow

  1. Pick the target URL: identify the page to crawl; it may be static or dynamically loaded.
  2. Send an HTTP request: typically with a library such as requests, httpx, or aiohttp, issuing a GET or POST request and receiving the response.
  3. Parse the response: the body may be HTML, JSON, XML, an image, etc.; common parsers include BeautifulSoup, lxml, parsel, PyQuery, and regular expressions.
  4. Extract the data: locate the target content by tag name, attribute, XPath, or CSS selector, and pull out its text or attribute values.
  5. Clean and store: deduplicate and clean the extracted content, then save it to CSV, JSON, SQLite, MySQL, MongoDB, or another store.
  6. Paginate/recurse: for multi-page data, work out the pagination logic (URL templates, Ajax requests) and loop over request + parse.
  7. Handle errors and anti-scraping: configure proxies, random User-Agents, rate limits, and IP rotation; deal with HTTP 403, CAPTCHAs, redirects, and the like.
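Step 6 (pagination) often boils down to filling a page number into a URL template. The sketch below builds the page URLs up front; the site and the page query parameter are invented for illustration:

```python
# Build listing-page URLs from a template; each URL would then go through
# steps 2-5 (request, parse, extract, store). The base URL is hypothetical.
BASE = 'https://example.com/articles?page={page}'

def page_urls(first, last):
    """Return the listing-page URLs for pages first..last inclusive."""
    return [BASE.format(page=n) for n in range(first, last + 1)]

for url in page_urls(1, 3):
    print(url)
# https://example.com/articles?page=1
# https://example.com/articles?page=2
# https://example.com/articles?page=3
```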

2.4 Legal and Ethical Considerations

  • Before crawling, check the target site's robots.txt (usually at https://example.com/robots.txt) and follow its rules;
  • Some sites forbid bulk scraping or commercial use; read and respect their copyright and privacy policies before you crawl;
  • Do not overload the target site; add reasonable delays (time.sleep) and cap your concurrency;
  • Comply with the laws governing crawling and any downstream use of the data; never use crawlers for illegal purposes.
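The robots.txt check mentioned above can be automated with the standard library's urllib.robotparser. A small sketch follows; the rules are made up, and against a real site you would load the live file with set_url() and read() instead:

```python
from urllib.robotparser import RobotFileParser

# Parse a made-up robots.txt. For a real site you would instead call
# rp.set_url('https://example.com/robots.txt') followed by rp.read().
rp = RobotFileParser()
rp.parse([
    'User-agent: *',
    'Disallow: /private/',
    'Crawl-delay: 2',
])

print(rp.can_fetch('MySpider', 'https://example.com/articles/1'))  # True
print(rp.can_fetch('MySpider', 'https://example.com/private/x'))   # False
print(rp.crawl_delay('MySpider'))                                  # 2
```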

3. Setting Up the Development Environment

3.1 Installing Python (3.8 or later recommended)

  1. Windows

    • Go to https://www.python.org/downloads, download a 3.8+ installer, keep "Add Python 3.x to PATH" checked, and click "Install Now".

    • When installation finishes, open a command prompt (Win + R → type cmd → Enter) and run:

      python --version
      pip --version
      

      to confirm that Python and pip are installed.

  2. macOS

    • Homebrew is the recommended route (the unversioned formula installs the current Python 3):

      brew install python
      
    • Then run:

      python3 --version
      pip3 --version
      

      and check the output.

  3. Linux (Ubuntu/Debian)

    sudo apt update
    sudo apt install python3 python3-pip python3-venv -y
    

    Then run:

    python3 --version
    pip3 --version
    

    to confirm.

Tip: if your machine has both Python 2.x and Python 3.x installed, you may need to type python3 and pip3 instead of python and pip.

3.2 Creating and Activating a Virtual Environment

To avoid global dependency conflicts, it is strongly recommended to create a separate virtual environment for each crawler project:

# Enter the project root
mkdir my_spider && cd my_spider

# Create a virtual environment in the project directory (python3 -m venv venv or python -m venv venv)
python3 -m venv venv

# Activate it
# Windows:
venv\Scripts\activate

# macOS/Linux:
source venv/bin/activate

Once activated, the prompt shows (venv), and every package you install is scoped to this environment.

3.3 Recommended Development Tools

  • IDE/editor

    • PyCharm Community / Professional: powerful, with integrated testing and version control.
    • VS Code: lightweight with a rich plugin ecosystem, great for quick edits.
    • Sublime Text: lightweight and quick to start; handy for small scripts.
  • Debugging

    • The debuggers built into VS Code/PyCharm support breakpoints and single-stepping.
    • For command-line scripts, pdb works as well.
  • Version control

    • Git plus the VS Code / PyCharm Git integrations for hosting and collaboration.
    • Host your projects on GitHub, Gitee, or a similar service.
  • Other helpers

    • Postman / Insomnia: simulate HTTP requests and inspect response headers;
    • Charles / Fiddler: packet-capture tools for debugging AJAX requests, cookies, headers, and so on.

4. Basics: A Simple Crawler with Requests + BeautifulSoup

4.1 Installing the Required Libraries

Inside the virtual environment, run:

pip install requests beautifulsoup4 lxml

  • requests: the most widely used Python HTTP library, for sending GET/POST requests.
  • beautifulsoup4: a popular, beginner-friendly HTML/XML parsing library.
  • lxml: a fast, powerful parser that BeautifulSoup can use as its backend.

4.2 HTTP Requests and Responses

  • HTTP request: consists of a method (GET, POST, PUT, ...), a URL, request headers, and an optional body.

  • HTTP response: contains a status code (200, 404, 500, ...), response headers, and a body (typically HTML, JSON, an image, or a file).

  • Common requests parameters

    • url: the request address.
    • params: URL query parameters (dict/string).
    • headers: custom request headers (e.g. User-Agent, Referer, Cookie).
    • data / json: form or JSON data to send with a POST request.
    • timeout: timeout in seconds, so a request cannot hang forever.
    • proxies: proxy configuration (covered later).

Example:

import requests

url = 'https://httpbin.org/get'
params = {'q': 'python 爬虫', 'page': 1}
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...'
}

response = requests.get(url, params=params, headers=headers, timeout=10)
print(response.status_code)       # status code, e.g. 200
print(response.encoding)          # encoding, e.g. 'utf-8'
print(response.text[:200])        # first 200 characters
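The data / json and timeout parameters behave the same way for POST requests. One way to see exactly what requests would send, without touching the network, is to build a PreparedRequest; the API URL below is a placeholder, not a real endpoint:

```python
import requests

# Build (but do not send) a POST request carrying a JSON body,
# so we can inspect what would actually go over the wire.
req = requests.Request(
    'POST',
    'https://example.com/api/search',   # placeholder URL
    json={'q': 'python', 'page': 1},
    headers={'User-Agent': 'Mozilla/5.0 ...'},
)
prep = req.prepare()

print(prep.method)                   # POST
print(prep.headers['Content-Type'])  # application/json
print(prep.body)                     # the serialized JSON payload
# To actually send it: requests.post(url, json=..., headers=..., timeout=10)
```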

4.3 Writing Your First Crawler: Fetching a Page Title

The following example fetches the title of https://www.example.com, demonstrating the simplest possible workflow:

# file: simple_spider.py

import requests
from bs4 import BeautifulSoup

def fetch_title(url):
    try:
        # 1. Send the GET request
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...'
        }
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()  # raise HTTPError if the status is not 200

        # 2. Set the correct encoding
        response.encoding = response.apparent_encoding

        # 3. Parse the HTML
        soup = BeautifulSoup(response.text, 'lxml')

        # 4. Extract the content of the <title> tag
        title_tag = soup.find('title')
        if title_tag:
            return title_tag.get_text().strip()
        else:
            return 'No <title> tag found'
    except Exception as e:
        return f'Fetch failed: {e}'

if __name__ == '__main__':
    url = 'https://www.example.com'
    title = fetch_title(url)
    print(f'Page title: {title}')
Sample run:

(venv) $ python simple_spider.py
Page title: Example Domain
4.4 Parsing HTML: BeautifulSoup in Detail

BeautifulSoup is easy to pick up; its most common methods are:

  1. Creating the object

    soup = BeautifulSoup(html_text, 'lxml')  # or 'html.parser'

  2. Finding a single node

    • soup.find(tag_name, attrs={}, recursive=True, string=None, **kwargs)
    • Example: soup.find('div', class_='content')
    • Use attrs={'class': 'foo', 'id': 'bar'} to pin down a node precisely.

  3. Finding all matching nodes

    • soup.find_all(tag_name, attrs={}, limit=None, **kwargs)
    • Example: soup.find_all('a', href=True) returns every link that has an href.

  4. CSS selectors

    • soup.select('div.content > ul li a') returns a list.
    • Supports id (#id), class (.class), attribute ([attr=value]) selectors, and more.

  5. Getting attributes or text

    • node.get('href'): fetch an attribute value;
    • node['href']: same, but raises an exception if the attribute is missing;
    • node.get_text(strip=True): the node's text with leading/trailing whitespace removed;
    • node.text: the merged text of the node and all its children.

  6. Handy shortcut attributes

    • soup.title / soup.title.string / soup.title.text
    • soup.body / soup.head / soup.a / soup.div and other tag shortcuts.

  7. Example: extract all article links from a listing page

    html = response.text
    soup = BeautifulSoup(html, 'lxml')
    # Assume each article link sits in <h2 class="post-title"><a href="...">...</a></h2>
    for h2 in soup.find_all('h2', class_='post-title'):
        a_tag = h2.find('a')
        title = a_tag.get_text(strip=True)
        link = a_tag['href']
        print(title, link)
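To try the select() calls from item 4 without fetching anything, here is a self-contained snippet; the markup is invented for the demo:

```python
from bs4 import BeautifulSoup

html = '''<div class="content">
    <ul>
        <li><a href="/a" id="first">A</a></li>
        <li><a href="/b">B</a></li>
    </ul>
</div>'''
soup = BeautifulSoup(html, 'html.parser')

# Tag, class, and descendant combinators, as described above
links = soup.select('div.content > ul li a')
print([a['href'] for a in links])                        # ['/a', '/b']

# id and attribute selectors work too
print(soup.select_one('#first').get_text())              # A
print([a['href'] for a in soup.select('a[href="/b"]')])  # ['/b']
```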
4.5 Saving Scraped Data as CSV/JSON

  1. CSV

    import csv

    data = [
        {'title': 'First post', 'url': 'https://...'},
        {'title': 'Second post', 'url': 'https://...'},
        # ...
    ]

    with open('result.csv', mode='w', newline='', encoding='utf-8-sig') as f:
        fieldnames = ['title', 'url']
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for item in data:
            writer.writerow(item)

    • encoding='utf-8-sig' keeps Excel from showing garbled characters when it opens the file.

  2. JSON

    import json

    data = [
        {'title': 'First post', 'url': 'https://...'},
        {'title': 'Second post', 'url': 'https://...'},
        # ...
    ]

    with open('result.json', 'w', encoding='utf-8') as f:
        json.dump(data, f, ensure_ascii=False, indent=4)

  3. SQLite (good for small projects)

    import sqlite3

    conn = sqlite3.connect('spider.db')
    cursor = conn.cursor()
    # Create the table if it does not exist
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS articles (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            title TEXT,
            url TEXT UNIQUE
        );
    ''')
    # Insert rows
    items = [
        ('First post', 'https://...'),
        ('Second post', 'https://...'),
    ]
    for title, url in items:
        try:
            cursor.execute('INSERT INTO articles (title, url) VALUES (?, ?)', (title, url))
        except sqlite3.IntegrityError:
            pass  # skip URLs that already exist
    conn.commit()
    conn.close()
4.6 Common Anti-Scraping Measures and Countermeasures

  1. User-Agent detection

    • The default requests User-Agent is easily identified as a bot and often gets blocked.
    • Countermeasure: pick a common browser User-Agent at random for each request.

    import random

    USER_AGENTS = [
        'Mozilla/5.0 ... Chrome/100.0.4896.127 ...',
        'Mozilla/5.0 ... Firefox/110.0 ...',
        'Mozilla/5.0 ... Safari/605.1.15 ...',
        # more can be found online
    ]
    headers = {'User-Agent': random.choice(USER_AGENTS)}
    response = requests.get(url, headers=headers)

  2. IP rate limiting

    • If a single IP issues many requests in a short window, the server may ban it or return 403.
    • Countermeasure: use a proxy pool (see Section 11) and rotate IPs periodically.

  3. Cookie-based authentication

    • Some sites only serve their full content after login; you must simulate the login to obtain cookies, then send them with later requests.
    • Use requests.Session() to manage the session; a Session stores and sends cookies automatically.

    import requests

    session = requests.Session()
    login_data = {'username': 'xxx', 'password': 'xxx'}
    session.post('https://example.com/login', data=login_data)
    # After a successful login, the session holds the cookies
    response = session.get('https://example.com/protected-page')

  4. CAPTCHAs

    • Simple CAPTCHAs can sometimes be solved with OCR; complex image CAPTCHAs need a dedicated solving service or human help.
    • As a beginner, prefer targets without CAPTCHAs, or look for an official API instead.

  5. AJAX / dynamic rendering

    • If page data is loaded by JavaScript, plain requests only sees the static HTML.
    • Countermeasure: inspect the AJAX endpoints in the browser's Network panel and request their JSON directly, or render the page with browser automation (Selenium/Playwright).
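Rate limiting and transient failures can also be handled at the session level. Below is one possible sketch, combining a random inter-request delay with urllib3's Retry mounted through an HTTPAdapter; the numbers are illustrative, not recommendations:

```python
import random
import time

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_polite_session():
    """Session that retries transient failures with exponential backoff."""
    retry = Retry(
        total=3,                                # at most 3 retries
        backoff_factor=1,                       # exponential backoff between retries
        status_forcelist=[429, 500, 502, 503],  # retry on these status codes
    )
    session = requests.Session()
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    return session

def polite_get(session, url, **kwargs):
    """GET with a random delay so successive requests are not bursty."""
    time.sleep(random.uniform(1.0, 3.0))
    return session.get(url, timeout=10, **kwargs)

session = make_polite_session()
# response = polite_get(session, 'https://example.com/page1')
```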
5. Advanced: More Powerful Parsing Tools

BeautifulSoup covers most beginner scenarios, but when pages are structurally complex, deeply nested, or you need fast bulk extraction, the tools below are a better fit.

5.1 lxml (XPath)

  • Features: implemented in C, very fast, supports standard XPath queries.

  • Installation:

    pip install lxml

  • Example:

    from lxml import etree

    html = '''<html><body>
        <div class="post"><h2><a href="/p1">Article A</a></h2></div>
        <div class="post"><h2><a href="/p2">Article B</a></h2></div>
    </body></html>'''

    # 1. Parse the text into an Element tree
    tree = etree.HTML(html)

    # 2. Use XPath to extract all link texts and hrefs
    titles = tree.xpath('//div[@class="post"]/h2/a/text()')
    links = tree.xpath('//div[@class="post"]/h2/a/@href')

    for t, l in zip(titles, links):
        print(t, l)
    # Output:
    # Article A /p1
    # Article B /p2

  • Common XPath syntax:

    • //tag[@attr="value"]: find every tag matching the condition;
    • text(): select text nodes;
    • @href: select an attribute value;
    • //div//a: every a anywhere among a div's descendants;
    • //ul/li[1]: the first li child;
    • contains(@class, "foo"): elements whose class contains "foo".
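The last few syntax items can be exercised on a small inline document; the HTML is made up for the demo:

```python
from lxml import etree

html = '''<ul>
    <li class="item hot"><a href="/h1">Hot 1</a></li>
    <li class="item"><a href="/n1">Normal 1</a></li>
    <li class="item"><a href="/n2">Normal 2</a></li>
</ul>'''
tree = etree.HTML(html)

# //ul/li[1] selects the first li child of the ul
first = tree.xpath('//ul/li[1]/a/text()')
print(first)        # ['Hot 1']

# contains(@class, "hot") matches elements whose class attribute contains "hot"
hot_links = tree.xpath('//li[contains(@class, "hot")]/a/@href')
print(hot_links)    # ['/h1']

# //ul//a reaches every descendant a element, however deeply nested
all_texts = tree.xpath('//ul//a/text()')
print(all_texts)    # ['Hot 1', 'Normal 1', 'Normal 2']
```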
5.2 parsel (the parser built into Scrapy)

  • Features: the CSS/XPath extraction toolkit that ships with Scrapy; its interface resembles lxml but fits Scrapy's data-extraction idioms more closely.

  • Installation:

    pip install parsel

  • Example:

    from parsel import Selector

    html = '''<ul>
        <li class="item"><a href="/a1">Item1</a></li>
        <li class="item"><a href="/a2">Item2</a></li>
    </ul>'''

    sel = Selector(text=html)
    # With CSS selectors
    for item in sel.css('li.item'):
        title = item.css('a::text').get()
        link = item.css('a::attr(href)').get()
        print(title, link)
    # With XPath
    for item in sel.xpath('//li[@class="item"]'):
        title = item.xpath('./a/text()').get()
        link = item.xpath('./a/@href').get()
        print(title, link)

  • parsel.Selector objects are used throughout Scrapy, and they work just as well outside a Scrapy project.
  <h4>5.3 PyQuery(类似 jQuery 的解析方式)</h4> 
  <ul> 
   <li> <p><strong>特点</strong>:接口风格类似 jQuery,习惯了前端的同学会很快上手。</p> </li> 
   <li> <p><strong>安装</strong>:</p> <pre><code class="prism language-bash">pip <span class="token function">install</span> pyquery
</code></pre> </li> 
   <li> <p><strong>示例</strong>:</p> <pre><code class="prism language-python"><span class="token keyword">from</span> pyquery <span class="token keyword">import</span> PyQuery <span class="token keyword">as</span> pq

html <span class="token operator">=</span> <span class="token triple-quoted-string string">'''<div id="posts">
    <h2><a href="/x1">新闻X1</a></h2>
    <h2><a href="/x2">新闻X2</a></h2>
</div>'''</span>

doc <span class="token operator">=</span> pq<span class="token punctuation">(</span>html<span class="token punctuation">)</span>
<span class="token comment"># 通过标签/ID/css 选择器定位</span>
<span class="token keyword">for</span> item <span class="token keyword">in</span> doc<span class="token punctuation">(</span><span class="token string">'#posts h2'</span><span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token comment"># item 是 lxml 的 Element,需要再次包装</span>
    a <span class="token operator">=</span> pq<span class="token punctuation">(</span>item<span class="token punctuation">)</span><span class="token punctuation">.</span>find<span class="token punctuation">(</span><span class="token string">'a'</span><span class="token punctuation">)</span>
    title <span class="token operator">=</span> a<span class="token punctuation">.</span>text<span class="token punctuation">(</span><span class="token punctuation">)</span>
    url <span class="token operator">=</span> a<span class="token punctuation">.</span>attr<span class="token punctuation">(</span><span class="token string">'href'</span><span class="token punctuation">)</span>
    <span class="token keyword">print</span><span class="token punctuation">(</span>title<span class="token punctuation">,</span> url<span class="token punctuation">)</span>
</code></pre> </li> 
   <li> <p>PyQuery 底层使用 lxml 作为解析器,解析速度接近直接调用 lxml。</p> </li> 
  </ul> 
  <h4>5.4 正则表达式在爬虫中的应用</h4> 
  <ul> 
   <li> <p>正则并不是万能的 HTML 解析方案,但在提取简单规则(如邮箱、电话号码、特定模式字符串)时非常方便。</p> </li> 
   <li> <p>在爬虫中,可先用 BeautifulSoup/lxml 找到相应的大块内容,再对内容字符串用正则提取。</p> </li> 
   <li> <p><strong>示例</strong>:</p> <pre><code class="prism language-python"><span class="token keyword">import</span> re
<span class="token keyword">from</span> bs4 <span class="token keyword">import</span> BeautifulSoup

html <span class="token operator">=</span> <span class="token triple-quoted-string string">'''<div class="info">
    联系邮箱:abc@example.com
    联系电话:123-4567-890
</div>'''</span>

soup <span class="token operator">=</span> BeautifulSoup<span class="token punctuation">(</span>html<span class="token punctuation">,</span> <span class="token string">'lxml'</span><span class="token punctuation">)</span>
info <span class="token operator">=</span> soup<span class="token punctuation">.</span>find<span class="token punctuation">(</span><span class="token string">'div'</span><span class="token punctuation">,</span> class_<span class="token operator">=</span><span class="token string">'info'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get_text<span class="token punctuation">(</span><span class="token punctuation">)</span>

<span class="token comment"># 匹配邮箱</span>
email_pattern <span class="token operator">=</span> <span class="token string">r'[\w\.-]+@[\w\.-]+'</span>
emails <span class="token operator">=</span> re<span class="token punctuation">.</span>findall<span class="token punctuation">(</span>email_pattern<span class="token punctuation">,</span> info<span class="token punctuation">)</span>
<span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string">'邮箱:'</span><span class="token punctuation">,</span> emails<span class="token punctuation">)</span>

<span class="token comment"># 匹配电话号码</span>
phone_pattern <span class="token operator">=</span> <span class="token string">r'\d{3}-\d{4}-\d{3,4}'</span>
phones <span class="token operator">=</span> re<span class="token punctuation">.</span>findall<span class="token punctuation">(</span>phone_pattern<span class="token punctuation">,</span> info<span class="token punctuation">)</span>
<span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string">'电话:'</span><span class="token punctuation">,</span> phones<span class="token punctuation">)</span>
</code></pre> </li> 
  </ul> 
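补充一点:如果同一模式要在大量页面上反复使用,建议先用 `re.compile` 预编译,并配合命名分组提高可读性(示例文本为虚构):

```python
import re

text = '联系邮箱:abc@example.com,备用邮箱:dev@test.org'

# 预编译模式;命名分组 user/domain 便于按名字取值
email_re = re.compile(r'(?P<user>[\w.-]+)@(?P<domain>[\w.-]+)')

for m in email_re.finditer(text):
    print(m.group(0), m.group('user'), m.group('domain'))
# abc@example.com abc example.com
# dev@test.org dev test.org
```

预编译只需做一次,之后的 `finditer`/`findall` 都复用同一个模式对象,对批量页面处理更友好。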
  <hr> 
  <h3>6. 框架篇:Scrapy 全面入门</h3> 
  <p>如果你想快速搭建一个可维护、可扩展的爬虫项目,Scrapy 是 Python 爬虫生态中最成熟、最流行的框架之一。</p> 
  <h4>6.1 Scrapy 简介</h4> 
  <ul> 
   <li> <p><strong>Scrapy</strong>:一个专为大规模网络爬取与信息提取设计的开源框架,具有高性能、高并发的特点,内置多种中间件与管道机制,并可借助 Scrapy-Redis 等扩展实现分布式。</p> </li> 
   <li> <p><strong>适用场景</strong>:</p> 
    <ul> 
     <li>大规模爬取同类型大量网页。</li> 
     <li>对页面进行复杂数据清洗、去重、存储。</li> 
     <li>需要高度定制化中间件或扩展时。</li> 
    </ul> </li> 
  </ul> 
  <h4>6.2 安装与项目结构</h4> 
  <ol> 
   <li> <p>安装 Scrapy:</p> <pre><code class="prism language-bash">pip <span class="token function">install</span> scrapy
</code></pre> </li> 
   <li> <p>创建 Scrapy 项目:</p> <pre><code class="prism language-bash">scrapy startproject myproject
</code></pre> </li> 
   <li> <p>项目目录结构(示例):</p> <pre><code>myproject/
    scrapy.cfg            # 部署时使用的配置文件
    myproject/            # 项目 Python 模块
        __init__.py
        items.py          # 定义数据模型(Item)
        middlewares.py    # 自定义中间件
        pipelines.py      # 数据处理与存储 Pipeline
        settings.py       # Scrapy 全局配置
        spiders/          # 各种爬虫文件放在这里
            __init__.py
            example_spider.py
</code></pre> </li> 
  </ol> 
  <h4>6.3 编写第一个 Scrapy 爬虫 Spider</h4> 
  <p>假设我们要爬取 <code>quotes.toscrape.com</code> 网站上的所有名言及作者:</p> 
  <ol> 
   <li> <p>在 <code>myproject/spiders/</code> 下新建 <code>quotes_spider.py</code>:</p> <pre><code class="prism language-python"><span class="token keyword">import</span> scrapy
<span class="token keyword">from</span> myproject<span class="token punctuation">.</span>items <span class="token keyword">import</span> MyprojectItem

<span class="token keyword">class</span> <span class="token class-name">QuotesSpider</span><span class="token punctuation">(</span>scrapy<span class="token punctuation">.</span>Spider<span class="token punctuation">)</span><span class="token punctuation">:</span>
    name <span class="token operator">=</span> <span class="token string">'quotes'</span>  <span class="token comment"># 爬虫名,运行时指定</span>
    allowed_domains <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token string">'quotes.toscrape.com'</span><span class="token punctuation">]</span>
    start_urls <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token string">'https://quotes.toscrape.com/'</span><span class="token punctuation">]</span>

    <span class="token keyword">def</span> <span class="token function">parse</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> response<span class="token punctuation">)</span><span class="token punctuation">:</span>
        <span class="token comment"># 提取每个名言块</span>
        <span class="token keyword">for</span> quote <span class="token keyword">in</span> response<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'div.quote'</span><span class="token punctuation">)</span><span class="token punctuation">:</span>
            item <span class="token operator">=</span> MyprojectItem<span class="token punctuation">(</span><span class="token punctuation">)</span>
            item<span class="token punctuation">[</span><span class="token string">'text'</span><span class="token punctuation">]</span> <span class="token operator">=</span> quote<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'span.text::text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token punctuation">)</span>
            item<span class="token punctuation">[</span><span class="token string">'author'</span><span class="token punctuation">]</span> <span class="token operator">=</span> quote<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'small.author::text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token punctuation">)</span>
            item<span class="token punctuation">[</span><span class="token string">'tags'</span><span class="token punctuation">]</span> <span class="token operator">=</span> quote<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'div.tags a.tag::text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>getall<span class="token punctuation">(</span><span class="token punctuation">)</span>
            <span class="token keyword">yield</span> item

        <span class="token comment"># 翻页:获取下一页链接并递归</span>
        next_page <span class="token operator">=</span> response<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'li.next a::attr(href)'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token punctuation">)</span>
        <span class="token keyword">if</span> next_page<span class="token punctuation">:</span>
            <span class="token keyword">yield</span> response<span class="token punctuation">.</span>follow<span class="token punctuation">(</span>next_page<span class="token punctuation">,</span> callback<span class="token operator">=</span>self<span class="token punctuation">.</span>parse<span class="token punctuation">)</span>
</code></pre> </li> 
   <li> <p>定义 Item 模型 (<code>myproject/items.py</code>):</p> <pre><code class="prism language-python"><span class="token keyword">import</span> scrapy

<span class="token keyword">class</span> <span class="token class-name">MyprojectItem</span><span class="token punctuation">(</span>scrapy<span class="token punctuation">.</span>Item<span class="token punctuation">)</span><span class="token punctuation">:</span>
    text <span class="token operator">=</span> scrapy<span class="token punctuation">.</span>Field<span class="token punctuation">(</span><span class="token punctuation">)</span>
    author <span class="token operator">=</span> scrapy<span class="token punctuation">.</span>Field<span class="token punctuation">(</span><span class="token punctuation">)</span>
    tags <span class="token operator">=</span> scrapy<span class="token punctuation">.</span>Field<span class="token punctuation">(</span><span class="token punctuation">)</span>
</code></pre> </li> 
   <li> <p>配置数据存储 Pipeline(可选存储到 JSON/CSV/数据库),如在 <code>myproject/pipelines.py</code>:</p> <pre><code class="prism language-python">import json

class JsonWriterPipeline:
    def open_spider(self, spider):
        self.file = open('quotes.json', 'w', encoding='utf-8')
        self.file.write('[\n')
        self.first_item = True

    def close_spider(self, spider):
        self.file.write('\n]')
        self.file.close()

    def process_item(self, item, spider):
        # 第一条之前不写逗号,避免数组末尾出现多余逗号导致 JSON 非法
        if self.first_item:
            self.first_item = False
        else:
            self.file.write(',\n')
        self.file.write(json.dumps(dict(item), ensure_ascii=False))
        return item
</code></pre> <p>并在 <code>settings.py</code> 中启用:</p> <pre><code class="prism language-python">ITEM_PIPELINES <span class="token operator">=</span> <span class="token punctuation">{</span>
    <span class="token string">'myproject.pipelines.JsonWriterPipeline'</span><span class="token punctuation">:</span> <span class="token number">300</span><span class="token punctuation">,</span>
<span class="token punctuation">}</span>
</code></pre> </li> 
   <li> <p>运行爬虫:</p> <pre><code class="prism language-bash">scrapy crawl quotes
</code></pre> <p>运行后,会在项目根目录生成 <code>quotes.json</code>,其中包含抓取到的所有名言数据。如果不想自己写 Pipeline,也可以直接用 <code>scrapy crawl quotes -o quotes.json</code>,利用 Scrapy 内置的 Feed 导出功能输出数据。</p> </li> 
  </ol> 
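上面 <code>parse()</code> 里"逐条产出条目、再跟进下一页"的模式,可以用一段纯 Python 在内存中模拟,帮助理解其控制流(站点数据为虚构,仅演示逻辑,不发起真实请求):

```python
# 用内存中的"假站点"模拟分页抓取:每页返回若干条目和可选的下一页链接
fake_site = {
    '/page/1': {'quotes': ['q1', 'q2'], 'next': '/page/2'},
    '/page/2': {'quotes': ['q3'], 'next': '/page/3'},
    '/page/3': {'quotes': ['q4', 'q5'], 'next': None},
}

def crawl(start):
    results, url = [], start
    while url:                          # 对应 if next_page: response.follow(...)
        page = fake_site[url]
        results.extend(page['quotes'])  # 对应循环中逐条 yield item
        url = page['next']
    return results

print(crawl('/page/1'))  # ['q1', 'q2', 'q3', 'q4', 'q5']
```

Scrapy 实际是把"跟进下一页"变成一个新的异步请求,由调度器统一排队,但数据流向与这段同步代码是一致的。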
  <h4>6.4 Item、Pipeline、Settings 详解</h4> 
  <ul> 
   <li><strong>Items (<code>items.py</code>)</strong>:定义要提取的数据结构与字段,相当于“数据模型”。</li> 
   <li><strong>Spiders (<code>spiders/xxx.py</code>)</strong>:每个 spider 文件对应一个任务,可接收 <code>start_urls</code>、<code>allowed_domains</code>、<code>parse()</code> 回调等。可自定义不同的回调函数来解析不同页面。</li> 
   <li><strong>Pipelines (<code>pipelines.py</code>)</strong>:处理从 Spider 返回的 Item,常见操作包括数据清洗(去重、格式化)、存储(写入 JSON/CSV、入库)、下载附件等。</li> 
   <li><strong>Settings (<code>settings.py</code>)</strong>:全局配置文件,包含并发数(<code>CONCURRENT_REQUESTS</code>)、下载延时(<code>DOWNLOAD_DELAY</code>)、中间件配置、管道配置、User-Agent 等。</li> 
  </ul> 
  <p>常见 Settings 配置示例:</p> 
  <pre><code class="prism language-python"><span class="token comment"># settings.py(只列部分)  </span>
BOT_NAME <span class="token operator">=</span> <span class="token string">'myproject'</span>

SPIDER_MODULES <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token string">'myproject.spiders'</span><span class="token punctuation">]</span>
NEWSPIDER_MODULE <span class="token operator">=</span> <span class="token string">'myproject.spiders'</span>

<span class="token comment"># 遵循 robots 协议</span>
ROBOTSTXT_OBEY <span class="token operator">=</span> <span class="token boolean">True</span>

<span class="token comment"># 并发请求数(默认 16)</span>
CONCURRENT_REQUESTS <span class="token operator">=</span> <span class="token number">8</span>

<span class="token comment"># 下载延时(秒),防止对目标站造成过大压力</span>
DOWNLOAD_DELAY <span class="token operator">=</span> <span class="token number">1</span>

<span class="token comment"># 配置 User-Agent</span>
DEFAULT_REQUEST_HEADERS <span class="token operator">=</span> <span class="token punctuation">{</span>
   <span class="token string">'User-Agent'</span><span class="token punctuation">:</span> <span class="token string">'Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...'</span><span class="token punctuation">,</span>
<span class="token punctuation">}</span>

<span class="token comment"># 启用 Pipeline</span>
ITEM_PIPELINES <span class="token operator">=</span> <span class="token punctuation">{</span>
   <span class="token string">'myproject.pipelines.JsonWriterPipeline'</span><span class="token punctuation">:</span> <span class="token number">300</span><span class="token punctuation">,</span>
<span class="token punctuation">}</span>

<span class="token comment"># 启用或禁用中间件、扩展、管道等</span>
DOWNLOADER_MIDDLEWARES <span class="token operator">=</span> <span class="token punctuation">{</span>
   <span class="token comment"># 'myproject.middlewares.SomeDownloaderMiddleware': 543,</span>
<span class="token punctuation">}</span>

<span class="token comment"># 日志等级</span>
LOG_LEVEL <span class="token operator">=</span> <span class="token string">'INFO'</span>
</code></pre> 
  <h4>6.5 Scrapy Shell 在线调试</h4> 
  <ul> 
   <li> <p>Scrapy 提供了 <code>scrapy shell <URL></code> 命令,可以快速测试 XPath、CSS 选择器。</p> <pre><code class="prism language-bash">scrapy shell <span class="token string">'https://quotes.toscrape.com/'</span>
</code></pre> </li> 
   <li> <p>进入 shell 后,你可以执行:</p> <pre><code class="prism language-python"><span class="token operator">>></span><span class="token operator">></span> response<span class="token punctuation">.</span>status
<span class="token number">200</span>
<span class="token operator">>></span><span class="token operator">></span> response<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'div.quote span.text::text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>getall<span class="token punctuation">(</span><span class="token punctuation">)</span>
<span class="token punctuation">[</span><span class="token string">'“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”'</span><span class="token punctuation">,</span> <span class="token punctuation">.</span><span class="token punctuation">.</span><span class="token punctuation">.</span><span class="token punctuation">]</span>
<span class="token operator">>></span><span class="token operator">></span> response<span class="token punctuation">.</span>xpath<span class="token punctuation">(</span><span class="token string">'//div[@class="quote"]/span[@class="text"]/text()'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>getall<span class="token punctuation">(</span><span class="token punctuation">)</span>
</code></pre> </li> 
   <li> <p>Shell 模式下,你可以快速试错、验证提取逻辑,比写完整 Spider 再跑要高效很多。</p> </li> 
  </ul> 
  <h4>6.6 分布式与多线程:Scrapy 爬虫并发配置</h4> 
  <ul> 
   <li><strong>并发请求数</strong>:在 <code>settings.py</code> 中设置 <code>CONCURRENT_REQUESTS</code>(默认 16);</li> 
   <li><strong>单域名并发</strong>:<code>CONCURRENT_REQUESTS_PER_DOMAIN</code>(默认 8);</li> 
   <li><strong>单 IP 并发</strong>:<code>CONCURRENT_REQUESTS_PER_IP</code>;</li> 
   <li><strong>下载延时</strong>:<code>DOWNLOAD_DELAY</code>(默认 0);</li> 
   <li><strong>自动限速</strong>:<code>AUTOTHROTTLE_ENABLED = True</code>,配合 <code>AUTOTHROTTLE_START_DELAY</code>、<code>AUTOTHROTTLE_MAX_DELAY</code> 等。</li> 
   <li><strong>并行请求</strong>:Scrapy 内部使用 Twisted 异步网络库实现高并发,单机即可轻松处理成千上万请求。</li> 
  </ul> 
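把上面这些并发与限速选项组合起来,<code>settings.py</code> 中一个常见的配置片段大致如下(数值仅为参考,应按目标站点的承受能力自行调整):

```python
# settings.py 片段:并发与自动限速(示例数值,需按实际情况调整)
CONCURRENT_REQUESTS = 16               # 全局并发请求数
CONCURRENT_REQUESTS_PER_DOMAIN = 8     # 单域名并发上限
DOWNLOAD_DELAY = 0.5                   # 基础下载延时(秒)

AUTOTHROTTLE_ENABLED = True            # 启用自动限速
AUTOTHROTTLE_START_DELAY = 1           # 初始延时
AUTOTHROTTLE_MAX_DELAY = 10            # 目标站压力大时允许的最大延时
AUTOTHROTTLE_TARGET_CONCURRENCY = 4.0  # 期望维持的平均并发数
```

启用 AutoThrottle 后,Scrapy 会根据响应延迟在 `AUTOTHROTTLE_START_DELAY` 与 `AUTOTHROTTLE_MAX_DELAY` 之间动态调整请求间隔,比固定的 `DOWNLOAD_DELAY` 更能兼顾速度与礼貌。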
  <h4>6.7 Scrapy 中间件与扩展(Downloader Middleware、Downloader Handler)</h4> 
  <ul> 
   <li> <p><strong>Downloader Middleware</strong>:位于 Scrapy 引擎与下载器之间,可控制请求/响应,常用于:</p> 
    <ul> 
     <li>动态设置 User-Agent、Proxy;</li> 
     <li>拦截并修改请求/响应头;</li> 
     <li>处理重试(Retry)、重定向(Redirect)等。</li> 
    </ul> </li> 
   <li> <p><strong>示例:随机 User-Agent Middleware</strong></p> <pre><code class="prism language-python"><span class="token comment"># myproject/middlewares.py</span>

<span class="token keyword">import</span> random
<span class="token keyword">from</span> scrapy <span class="token keyword">import</span> signals

<span class="token keyword">class</span> <span class="token class-name">RandomUserAgentMiddleware</span><span class="token punctuation">:</span>
    <span class="token keyword">def</span> <span class="token function">__init__</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> user_agents<span class="token punctuation">)</span><span class="token punctuation">:</span>
        self<span class="token punctuation">.</span>user_agents <span class="token operator">=</span> user_agents

    <span class="token decorator annotation punctuation">@classmethod</span>
    <span class="token keyword">def</span> <span class="token function">from_crawler</span><span class="token punctuation">(</span>cls<span class="token punctuation">,</span> crawler<span class="token punctuation">)</span><span class="token punctuation">:</span>
        <span class="token keyword">return</span> cls<span class="token punctuation">(</span>
            user_agents<span class="token operator">=</span>crawler<span class="token punctuation">.</span>settings<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'USER_AGENTS_LIST'</span><span class="token punctuation">)</span>
        <span class="token punctuation">)</span>

    <span class="token keyword">def</span> <span class="token function">process_request</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> request<span class="token punctuation">,</span> spider<span class="token punctuation">)</span><span class="token punctuation">:</span>
        ua <span class="token operator">=</span> random<span class="token punctuation">.</span>choice<span class="token punctuation">(</span>self<span class="token punctuation">.</span>user_agents<span class="token punctuation">)</span>
        request<span class="token punctuation">.</span>headers<span class="token punctuation">.</span>setdefault<span class="token punctuation">(</span><span class="token string">'User-Agent'</span><span class="token punctuation">,</span> ua<span class="token punctuation">)</span>
</code></pre> <p>并在 <code>settings.py</code> 中配置:</p> <pre><code class="prism language-python">USER_AGENTS_LIST <span class="token operator">=</span> <span class="token punctuation">[</span>
    <span class="token string">'Mozilla/5.0 ... Chrome/100.0 ...'</span><span class="token punctuation">,</span>
    <span class="token string">'Mozilla/5.0 ... Firefox/110.0 ...'</span><span class="token punctuation">,</span>
    <span class="token comment"># 更多 User-Agent</span>
<span class="token punctuation">]</span>

DOWNLOADER_MIDDLEWARES <span class="token operator">=</span> <span class="token punctuation">{</span>
    <span class="token string">'myproject.middlewares.RandomUserAgentMiddleware'</span><span class="token punctuation">:</span> <span class="token number">400</span><span class="token punctuation">,</span>
    <span class="token string">'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware'</span><span class="token punctuation">:</span> <span class="token boolean">None</span><span class="token punctuation">,</span>
<span class="token punctuation">}</span>
</code></pre> </li> 
   <li> <p><strong>Downloader Handler</strong>:更底层的接口,一般不常用,Scrapy 已提供 <code>HttpDownloadHandler</code>、<code>S3DownloadHandler</code> 等。</p> </li> 
  </ul> 
  <hr> 
  <h3>7. 动态内容爬取:Selenium 与 Playwright</h3> 
  <p>当目标网页内容依赖 JavaScript 动态渲染时,单纯用 <code>requests</code> 或 Scrapy 获取到的 HTML 往往不包含最终可视化的数据。此时可以使用“浏览器自动化”工具,让其像真实浏览器一样加载页面,再提取渲染后的内容。</p> 
  <h4>7.1 为什么需要浏览器自动化?</h4> 
  <ul> 
   <li> <p>许多现代网站(尤其是单页应用 SPA)使用 React、Vue、Angular 等框架,通过 AJAX 或 API 获取数据并在前端渲染,直接请求 URL 只能拿到空白或框架代码。</p> </li> 
   <li> <p>浏览器自动化可以:</p> 
    <ol> 
     <li>启动一个真实或无头浏览器实例;</li> 
     <li>访问页面,等待 JavaScript 执行完成;</li> 
     <li>拿到渲染完毕的 DOM,然后再用解析库提取。</li> 
    </ol> </li> 
  </ul> 
  <h4>7.2 Selenium 基础用法</h4> 
  <ol> 
   <li> <p><strong>安装</strong>:</p> <pre><code class="prism language-bash">pip <span class="token function">install</span> selenium
</code></pre> </li> 
   <li> <p><strong>下载 WebDriver</strong>(以 Chrome 为例):</p> 
    <ul> 
     <li>前往 ChromeDriver 下载页面,下载与本地 Chrome 版本相匹配的 <code>chromedriver</code>(Selenium 4.6+ 内置 Selenium Manager,通常可自动下载匹配的驱动,无需手动管理)。</li> 
     <li>将 <code>chromedriver</code> 放置在系统 PATH 下,或在代码中指定路径。</li> 
    </ul> </li> 
   <li> <p><strong>示例:抓取动态网页内容</strong></p> <pre><code class="prism language-python"><span class="token keyword">from</span> selenium <span class="token keyword">import</span> webdriver
<span class="token keyword">from</span> selenium<span class="token punctuation">.</span>webdriver<span class="token punctuation">.</span>chrome<span class="token punctuation">.</span>service <span class="token keyword">import</span> Service <span class="token keyword">as</span> ChromeService
<span class="token keyword">from</span> selenium<span class="token punctuation">.</span>webdriver<span class="token punctuation">.</span>common<span class="token punctuation">.</span>by <span class="token keyword">import</span> By
<span class="token keyword">from</span> selenium<span class="token punctuation">.</span>webdriver<span class="token punctuation">.</span>chrome<span class="token punctuation">.</span>options <span class="token keyword">import</span> Options
<span class="token keyword">import</span> time

<span class="token comment"># 1. 配置 Chrome 选项</span>
chrome_options <span class="token operator">=</span> Options<span class="token punctuation">(</span><span class="token punctuation">)</span>
chrome_options<span class="token punctuation">.</span>add_argument<span class="token punctuation">(</span><span class="token string">'--headless'</span><span class="token punctuation">)</span>  <span class="token comment"># 无界面模式</span>
chrome_options<span class="token punctuation">.</span>add_argument<span class="token punctuation">(</span><span class="token string">'--no-sandbox'</span><span class="token punctuation">)</span>
chrome_options<span class="token punctuation">.</span>add_argument<span class="token punctuation">(</span><span class="token string">'--disable-gpu'</span><span class="token punctuation">)</span>

<span class="token comment"># 2. 指定 chromedriver 路径或直接放到 PATH 中</span>
service <span class="token operator">=</span> ChromeService<span class="token punctuation">(</span>executable_path<span class="token operator">=</span><span class="token string">'path/to/chromedriver'</span><span class="token punctuation">)</span>

<span class="token comment"># 3. 创建 WebDriver</span>
driver <span class="token operator">=</span> webdriver<span class="token punctuation">.</span>Chrome<span class="token punctuation">(</span>service<span class="token operator">=</span>service<span class="token punctuation">,</span> options<span class="token operator">=</span>chrome_options<span class="token punctuation">)</span>

<span class="token keyword">try</span><span class="token punctuation">:</span>
    <span class="token comment"># 4. 打开页面</span>
    driver<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'https://quotes.toscrape.com/js/'</span><span class="token punctuation">)</span>  <span class="token comment"># 这是一个 JavaScript 渲染的示例</span>

    <span class="token comment"># 5. 等待 JS 渲染,最简单的方式:time.sleep(建议改用显式/隐式等待)</span>
    time<span class="token punctuation">.</span>sleep<span class="token punctuation">(</span><span class="token number">2</span><span class="token punctuation">)</span>

    <span class="token comment"># 6. 提取渲染后的 HTML</span>
    html <span class="token operator">=</span> driver<span class="token punctuation">.</span>page_source

    <span class="token comment"># 7. 交给 BeautifulSoup 或 lxml 解析</span>
    <span class="token keyword">from</span> bs4 <span class="token keyword">import</span> BeautifulSoup
    soup <span class="token operator">=</span> BeautifulSoup<span class="token punctuation">(</span>html<span class="token punctuation">,</span> <span class="token string">'lxml'</span><span class="token punctuation">)</span>
    <span class="token keyword">for</span> quote <span class="token keyword">in</span> soup<span class="token punctuation">.</span>select<span class="token punctuation">(</span><span class="token string">'div.quote'</span><span class="token punctuation">)</span><span class="token punctuation">:</span>  <span class="token comment"># BeautifulSoup 用 select() 执行 CSS 选择器</span>
        text <span class="token operator">=</span> quote<span class="token punctuation">.</span>find<span class="token punctuation">(</span><span class="token string">'span'</span><span class="token punctuation">,</span> class_<span class="token operator">=</span><span class="token string">'text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get_text<span class="token punctuation">(</span><span class="token punctuation">)</span>
        author <span class="token operator">=</span> quote<span class="token punctuation">.</span>find<span class="token punctuation">(</span><span class="token string">'small'</span><span class="token punctuation">,</span> class_<span class="token operator">=</span><span class="token string">'author'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get_text<span class="token punctuation">(</span><span class="token punctuation">)</span>
        <span class="token keyword">print</span><span class="token punctuation">(</span>text<span class="token punctuation">,</span> author<span class="token punctuation">)</span>
<span class="token keyword">finally</span><span class="token punctuation">:</span>
    driver<span class="token punctuation">.</span>quit<span class="token punctuation">(</span><span class="token punctuation">)</span>
</code></pre> </li> 
   <li> <p><strong>显式等待与隐式等待</strong></p> 
    <ul> 
     <li> <p><strong>隐式等待</strong>:<code>driver.implicitly_wait(10)</code>,在寻找元素时最长等待 10 秒;</p> </li> 
     <li> <p><strong>显式等待</strong>:使用 <code>WebDriverWait</code> 与 <code>ExpectedConditions</code>,例如:</p> <pre><code class="prism language-python"><span class="token keyword">from</span> selenium<span class="token punctuation">.</span>webdriver<span class="token punctuation">.</span>support<span class="token punctuation">.</span>ui <span class="token keyword">import</span> WebDriverWait
<span class="token keyword">from</span> selenium<span class="token punctuation">.</span>webdriver<span class="token punctuation">.</span>support <span class="token keyword">import</span> expected_conditions <span class="token keyword">as</span> EC

element <span class="token operator">=</span> WebDriverWait<span class="token punctuation">(</span>driver<span class="token punctuation">,</span> <span class="token number">10</span><span class="token punctuation">)</span><span class="token punctuation">.</span>until<span class="token punctuation">(</span>
    EC<span class="token punctuation">.</span>presence_of_element_located<span class="token punctuation">(</span><span class="token punctuation">(</span>By<span class="token punctuation">.</span>CSS_SELECTOR<span class="token punctuation">,</span> <span class="token string">'div.quote'</span><span class="token punctuation">)</span><span class="token punctuation">)</span>
<span class="token punctuation">)</span>
</code></pre> </li> 
    </ul> </li> 
  </ol> 
  <h4>7.3 Playwright for Python(更快更轻量)</h4> 
  <ul> 
   <li> <p><strong>Playwright</strong>:由微软维护、继承自 Puppeteer 的跨浏览器自动化库,支持 Chromium、Firefox、WebKit,无需单独下载 WebDriver。</p> </li> 
   <li> <p><strong>优点</strong>:启动速度快、API 简洁、并发控制更灵活。</p> </li> 
   <li> <p><strong>安装</strong>:</p> <pre><code class="prism language-bash">pip <span class="token function">install</span> playwright
<span class="token comment"># 安装浏览器内核(只需第一次执行)</span>
playwright <span class="token function">install</span>
</code></pre> </li> 
   <li> <p><strong>示例:抓取动态内容</strong></p> <pre><code class="prism language-python"><span class="token keyword">import</span> asyncio
<span class="token keyword">from</span> playwright<span class="token punctuation">.</span>async_api <span class="token keyword">import</span> async_playwright
<span class="token keyword">from</span> bs4 <span class="token keyword">import</span> BeautifulSoup

<span class="token keyword">async</span> <span class="token keyword">def</span> <span class="token function">main</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token keyword">async</span> <span class="token keyword">with</span> async_playwright<span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token keyword">as</span> p<span class="token punctuation">:</span>
        browser <span class="token operator">=</span> <span class="token keyword">await</span> p<span class="token punctuation">.</span>chromium<span class="token punctuation">.</span>launch<span class="token punctuation">(</span>headless<span class="token operator">=</span><span class="token boolean">True</span><span class="token punctuation">)</span>
        page <span class="token operator">=</span> <span class="token keyword">await</span> browser<span class="token punctuation">.</span>new_page<span class="token punctuation">(</span><span class="token punctuation">)</span>
        <span class="token keyword">await</span> page<span class="token punctuation">.</span>goto<span class="token punctuation">(</span><span class="token string">'https://quotes.toscrape.com/js/'</span><span class="token punctuation">)</span>
        <span class="token comment"># 可选:等待某个元素加载完成</span>
        <span class="token keyword">await</span> page<span class="token punctuation">.</span>wait_for_selector<span class="token punctuation">(</span><span class="token string">'div.quote'</span><span class="token punctuation">)</span>
        content <span class="token operator">=</span> <span class="token keyword">await</span> page<span class="token punctuation">.</span>content<span class="token punctuation">(</span><span class="token punctuation">)</span>  <span class="token comment"># 获取渲染后的 HTML</span>
        <span class="token keyword">await</span> browser<span class="token punctuation">.</span>close<span class="token punctuation">(</span><span class="token punctuation">)</span>

        <span class="token comment"># 交给 BeautifulSoup 解析</span>
        soup <span class="token operator">=</span> BeautifulSoup<span class="token punctuation">(</span>content<span class="token punctuation">,</span> <span class="token string">'lxml'</span><span class="token punctuation">)</span>
        <span class="token keyword">for</span> quote <span class="token keyword">in</span> soup<span class="token punctuation">.</span>select<span class="token punctuation">(</span><span class="token string">'div.quote'</span><span class="token punctuation">)</span><span class="token punctuation">:</span>
            text <span class="token operator">=</span> quote<span class="token punctuation">.</span>select_one<span class="token punctuation">(</span><span class="token string">'span.text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get_text<span class="token punctuation">(</span><span class="token punctuation">)</span>
            author <span class="token operator">=</span> quote<span class="token punctuation">.</span>select_one<span class="token punctuation">(</span><span class="token string">'small.author'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get_text<span class="token punctuation">(</span><span class="token punctuation">)</span>
            <span class="token keyword">print</span><span class="token punctuation">(</span>text<span class="token punctuation">,</span> author<span class="token punctuation">)</span>

<span class="token keyword">if</span> __name__ <span class="token operator">==</span> <span class="token string">'__main__'</span><span class="token punctuation">:</span>
    asyncio<span class="token punctuation">.</span>run<span class="token punctuation">(</span>main<span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">)</span>
</code></pre> </li> 
   <li> <p><strong>同步版 Playwright</strong><br> 如果你不想使用异步,也可以借助 <code>sync_api</code>:</p> <pre><code class="prism language-python"><span class="token keyword">from</span> playwright<span class="token punctuation">.</span>sync_api <span class="token keyword">import</span> sync_playwright
<span class="token keyword">from</span> bs4 <span class="token keyword">import</span> BeautifulSoup

<span class="token keyword">def</span> <span class="token function">main</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token keyword">with</span> sync_playwright<span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token keyword">as</span> p<span class="token punctuation">:</span>
        browser <span class="token operator">=</span> p<span class="token punctuation">.</span>chromium<span class="token punctuation">.</span>launch<span class="token punctuation">(</span>headless<span class="token operator">=</span><span class="token boolean">True</span><span class="token punctuation">)</span>
        page <span class="token operator">=</span> browser<span class="token punctuation">.</span>new_page<span class="token punctuation">(</span><span class="token punctuation">)</span>
        page<span class="token punctuation">.</span>goto<span class="token punctuation">(</span><span class="token string">'https://quotes.toscrape.com/js/'</span><span class="token punctuation">)</span>
        page<span class="token punctuation">.</span>wait_for_selector<span class="token punctuation">(</span><span class="token string">'div.quote'</span><span class="token punctuation">)</span>
        html <span class="token operator">=</span> page<span class="token punctuation">.</span>content<span class="token punctuation">(</span><span class="token punctuation">)</span>
        browser<span class="token punctuation">.</span>close<span class="token punctuation">(</span><span class="token punctuation">)</span>

    soup <span class="token operator">=</span> BeautifulSoup<span class="token punctuation">(</span>html<span class="token punctuation">,</span> <span class="token string">'lxml'</span><span class="token punctuation">)</span>
    <span class="token keyword">for</span> quote <span class="token keyword">in</span> soup<span class="token punctuation">.</span>select<span class="token punctuation">(</span><span class="token string">'div.quote'</span><span class="token punctuation">)</span><span class="token punctuation">:</span>
        text <span class="token operator">=</span> quote<span class="token punctuation">.</span>select_one<span class="token punctuation">(</span><span class="token string">'span.text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get_text<span class="token punctuation">(</span><span class="token punctuation">)</span>
        author <span class="token operator">=</span> quote<span class="token punctuation">.</span>select_one<span class="token punctuation">(</span><span class="token string">'small.author'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get_text<span class="token punctuation">(</span><span class="token punctuation">)</span>
        <span class="token keyword">print</span><span class="token punctuation">(</span>text<span class="token punctuation">,</span> author<span class="token punctuation">)</span>

<span class="token keyword">if</span> __name__ <span class="token operator">==</span> <span class="token string">'__main__'</span><span class="token punctuation">:</span>
    main<span class="token punctuation">(</span><span class="token punctuation">)</span>
</code></pre> </li> 
  </ul> 
  <h4>7.4 无头浏览器(headless)模式及性能优化</h4> 
  <ul> 
   <li> <p><strong>无头模式</strong>:Linux 服务器等没有图形界面的环境必须加 <code>--headless</code> 参数;在 macOS/Windows 上启用无头模式也能加快启动、降低资源占用。</p> </li> 
   <li> <p><strong>资源限制</strong>:可以通过设置启动参数降低资源占用,如:</p> 
    <ul> 
     <li>Chrome:<code>chrome_options.add_argument('--disable-gpu')</code>、<code>--no-sandbox</code>、<code>--disable-dev-shm-usage</code>;</li> 
     <li>Playwright:<code>browser = await p.chromium.launch(headless=True, args=['--disable-gpu', '--no-sandbox'])</code>。</li> 
    </ul> </li> 
   <li> <p><strong>避免过度渲染</strong>:如果只想拿纯数据,尽量通过分析接口(XHR 请求)直接调用后台 API,不必启动完整浏览器。</p> </li> 
  </ul> 
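  <p>举例来说,很多页面的动态数据其实来自某个返回 JSON 的 XHR 接口(可以在浏览器开发者工具的 Network 面板中找到)。下面是一个示意,其中接口地址与返回字段均为假设,实际使用时需按真实接口调整:</p>

```python
# 示意:跳过浏览器,直接请求页面背后的 JSON 接口
# 注意:API_URL 与返回字段 "items" 均为假设,需替换为真实 XHR 请求的地址和结构
import requests

API_URL = 'https://example.com/api/items'   # 假设的接口地址

def build_params(page: int, page_size: int = 20) -> dict:
    """构造假设接口的分页查询参数(字段名需按真实接口调整)。"""
    return {'page': page, 'size': page_size}

def fetch_items(page: int) -> list:
    """直接调用后台 JSON 接口,省去启动浏览器和渲染的开销。"""
    resp = requests.get(
        API_URL,
        params=build_params(page),
        headers={'User-Agent': 'Mozilla/5.0'},  # 基本的请求头伪装
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get('items', [])         # 假设响应形如 {"items": [...]}
```

  <p>相比驱动完整浏览器,直接调接口通常快一个数量级,也更稳定;代价是需要先花时间分析请求参数与返回结构。</p>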
  <h4>7.5 结合 Selenium/Playwright 与 BeautifulSoup 解析</h4> 
  <p>一般流程:</p> 
  <ol> 
   <li>用 Selenium/Playwright 拿到渲染后的 <code>page_source</code> 或 <code>content()</code>;</li> 
   <li>用 BeautifulSoup/lxml 对 HTML 进行二次解析与提取。</li> 
  </ol> 
  <p>示例综合:</p> 
  <pre><code class="prism language-python"><span class="token keyword">from</span> selenium <span class="token keyword">import</span> webdriver
<span class="token keyword">from</span> selenium<span class="token punctuation">.</span>webdriver<span class="token punctuation">.</span>chrome<span class="token punctuation">.</span>service <span class="token keyword">import</span> Service <span class="token keyword">as</span> ChromeService
<span class="token keyword">from</span> selenium<span class="token punctuation">.</span>webdriver<span class="token punctuation">.</span>chrome<span class="token punctuation">.</span>options <span class="token keyword">import</span> Options
<span class="token keyword">from</span> selenium<span class="token punctuation">.</span>webdriver<span class="token punctuation">.</span>common<span class="token punctuation">.</span>by <span class="token keyword">import</span> By
<span class="token keyword">from</span> selenium<span class="token punctuation">.</span>webdriver<span class="token punctuation">.</span>support<span class="token punctuation">.</span>ui <span class="token keyword">import</span> WebDriverWait
<span class="token keyword">from</span> selenium<span class="token punctuation">.</span>webdriver<span class="token punctuation">.</span>support <span class="token keyword">import</span> expected_conditions <span class="token keyword">as</span> EC
<span class="token keyword">from</span> bs4 <span class="token keyword">import</span> BeautifulSoup

chrome_options <span class="token operator">=</span> Options<span class="token punctuation">(</span><span class="token punctuation">)</span>
chrome_options<span class="token punctuation">.</span>add_argument<span class="token punctuation">(</span><span class="token string">'--headless'</span><span class="token punctuation">)</span>
service <span class="token operator">=</span> ChromeService<span class="token punctuation">(</span><span class="token string">'path/to/chromedriver'</span><span class="token punctuation">)</span>
driver <span class="token operator">=</span> webdriver<span class="token punctuation">.</span>Chrome<span class="token punctuation">(</span>service<span class="token operator">=</span>service<span class="token punctuation">,</span> options<span class="token operator">=</span>chrome_options<span class="token punctuation">)</span>

<span class="token keyword">try</span><span class="token punctuation">:</span>
    driver<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'https://example.com/dynamic-page'</span><span class="token punctuation">)</span>
    <span class="token comment"># implicitly_wait 只影响 find_element 的查找超时,并不会等待动态内容渲染;</span>
    <span class="token comment"># 这里改用显式等待,确认目标元素出现后再读取 page_source</span>
    WebDriverWait<span class="token punctuation">(</span>driver<span class="token punctuation">,</span> <span class="token number">10</span><span class="token punctuation">)</span><span class="token punctuation">.</span>until<span class="token punctuation">(</span>
        EC<span class="token punctuation">.</span>presence_of_element_located<span class="token punctuation">(</span><span class="token punctuation">(</span>By<span class="token punctuation">.</span>CSS_SELECTOR<span class="token punctuation">,</span> <span class="token string">'div.article'</span><span class="token punctuation">)</span><span class="token punctuation">)</span>
    <span class="token punctuation">)</span>
    html <span class="token operator">=</span> driver<span class="token punctuation">.</span>page_source
    soup <span class="token operator">=</span> BeautifulSoup<span class="token punctuation">(</span>html<span class="token punctuation">,</span> <span class="token string">'lxml'</span><span class="token punctuation">)</span>
    <span class="token comment"># 根据解析需求提取数据</span>
    <span class="token keyword">for</span> item <span class="token keyword">in</span> soup<span class="token punctuation">.</span>select<span class="token punctuation">(</span><span class="token string">'div.article'</span><span class="token punctuation">)</span><span class="token punctuation">:</span>
        title <span class="token operator">=</span> item<span class="token punctuation">.</span>select_one<span class="token punctuation">(</span><span class="token string">'h1'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get_text<span class="token punctuation">(</span><span class="token punctuation">)</span>
        content <span class="token operator">=</span> item<span class="token punctuation">.</span>select_one<span class="token punctuation">(</span><span class="token string">'div.content'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get_text<span class="token punctuation">(</span>strip<span class="token operator">=</span><span class="token boolean">True</span><span class="token punctuation">)</span>
        <span class="token keyword">print</span><span class="token punctuation">(</span>title<span class="token punctuation">,</span> content<span class="token punctuation">)</span>
<span class="token keyword">finally</span><span class="token punctuation">:</span>
    driver<span class="token punctuation">.</span>quit<span class="token punctuation">(</span><span class="token punctuation">)</span>
</code></pre> 
  <hr> 
  <h3>8. 异步爬虫:aiohttp + asyncio 与 HTTPX</h3> 
  <p>当面对上千个、甚至上万个链接需要同时抓取时,同步阻塞式的 <code>requests</code> 就显得效率低下。Python 原生的 <code>asyncio</code> 协程、<code>aiohttp</code> 库或 <code>httpx</code> 异步模式可以极大提升并发性能。</p> 
  <h4>8.1 同步 vs 异步:性能原理简述</h4> 
  <ul> 
   <li><strong>同步(Blocking)</strong>:一次请求完毕后才开始下一次请求。</li> 
   <li><strong>异步(Non-Blocking)</strong>:发出请求后可立即切换到其他任务,网络 I/O 等待期间不阻塞线程。</li> 
   <li>对于 I/O 密集型爬虫,异步能显著提高吞吐量。</li> 
  </ul> 
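  <p>上面的差别可以不发真实请求、在本地用 <code>asyncio.sleep</code> 模拟体会:顺序执行 5 个 0.2 秒的"请求"约需 1 秒,并发执行则接近单次的 0.2 秒。以下是一个示意:</p>

```python
# 用 asyncio.sleep 模拟网络 I/O,直观感受并发对 I/O 密集任务的提升
import asyncio
import time

async def fake_request(i: int) -> int:
    await asyncio.sleep(0.2)   # 模拟一次约 0.2 秒的网络等待
    return i

async def run_concurrent(n: int) -> float:
    """并发执行 n 个模拟请求,返回总耗时(秒)。"""
    start = time.perf_counter()
    await asyncio.gather(*(fake_request(i) for i in range(n)))
    return time.perf_counter() - start

if __name__ == '__main__':
    elapsed = asyncio.run(run_concurrent(5))
    # 5 个请求并发执行,总耗时接近 0.2 秒,而非顺序执行的约 1 秒
    print(f'并发执行 5 个请求耗时约 {elapsed:.2f} 秒')
```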
  <h4>8.2 aiohttp 入门示例</h4> 
  <ol> 
   <li> <p><strong>安装</strong>:</p> <pre><code class="prism language-bash">pip <span class="token function">install</span> aiohttp
</code></pre> </li> 
   <li> <p><strong>使用 asyncio + aiohttp 并发抓取</strong></p> <pre><code class="prism language-python"><span class="token keyword">import</span> asyncio
<span class="token keyword">import</span> aiohttp
<span class="token keyword">from</span> bs4 <span class="token keyword">import</span> BeautifulSoup

<span class="token keyword">async</span> <span class="token keyword">def</span> <span class="token function">fetch</span><span class="token punctuation">(</span>session<span class="token punctuation">,</span> url<span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token keyword">try</span><span class="token punctuation">:</span>
        <span class="token keyword">async</span> <span class="token keyword">with</span> session<span class="token punctuation">.</span>get<span class="token punctuation">(</span>url<span class="token punctuation">,</span> timeout<span class="token operator">=</span><span class="token number">10</span><span class="token punctuation">)</span> <span class="token keyword">as</span> response<span class="token punctuation">:</span>
            text <span class="token operator">=</span> <span class="token keyword">await</span> response<span class="token punctuation">.</span>text<span class="token punctuation">(</span><span class="token punctuation">)</span>
            <span class="token keyword">return</span> text
    <span class="token keyword">except</span> Exception <span class="token keyword">as</span> e<span class="token punctuation">:</span>
        <span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string-interpolation"><span class="token string">f'抓取 </span><span class="token interpolation"><span class="token punctuation">{</span>url<span class="token punctuation">}</span></span><span class="token string"> 失败:</span><span class="token interpolation"><span class="token punctuation">{</span>e<span class="token punctuation">}</span></span><span class="token string">'</span></span><span class="token punctuation">)</span>
        <span class="token keyword">return</span> <span class="token boolean">None</span>

<span class="token keyword">async</span> <span class="token keyword">def</span> <span class="token function">parse</span><span class="token punctuation">(</span>html<span class="token punctuation">,</span> url<span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token keyword">if</span> <span class="token keyword">not</span> html<span class="token punctuation">:</span>
        <span class="token keyword">return</span>
    soup <span class="token operator">=</span> BeautifulSoup<span class="token punctuation">(</span>html<span class="token punctuation">,</span> <span class="token string">'lxml'</span><span class="token punctuation">)</span>
    title <span class="token operator">=</span> soup<span class="token punctuation">.</span>find<span class="token punctuation">(</span><span class="token string">'title'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get_text<span class="token punctuation">(</span>strip<span class="token operator">=</span><span class="token boolean">True</span><span class="token punctuation">)</span> <span class="token keyword">if</span> soup<span class="token punctuation">.</span>find<span class="token punctuation">(</span><span class="token string">'title'</span><span class="token punctuation">)</span> <span class="token keyword">else</span> <span class="token string">'N/A'</span>
    <span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string-interpolation"><span class="token string">f'URL: </span><span class="token interpolation"><span class="token punctuation">{</span>url<span class="token punctuation">}</span></span><span class="token string">,Title: </span><span class="token interpolation"><span class="token punctuation">{</span>title<span class="token punctuation">}</span></span><span class="token string">'</span></span><span class="token punctuation">)</span>

<span class="token keyword">async</span> <span class="token keyword">def</span> <span class="token function">main</span><span class="token punctuation">(</span>urls<span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token comment"># connector 限制最大并发数,防止打开过多 TCP 连接</span>
    conn <span class="token operator">=</span> aiohttp<span class="token punctuation">.</span>TCPConnector<span class="token punctuation">(</span>limit<span class="token operator">=</span><span class="token number">50</span><span class="token punctuation">)</span>
    <span class="token keyword">async</span> <span class="token keyword">with</span> aiohttp<span class="token punctuation">.</span>ClientSession<span class="token punctuation">(</span>connector<span class="token operator">=</span>conn<span class="token punctuation">)</span> <span class="token keyword">as</span> session<span class="token punctuation">:</span>
        tasks <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token punctuation">]</span>
        <span class="token keyword">for</span> url <span class="token keyword">in</span> urls<span class="token punctuation">:</span>
            task <span class="token operator">=</span> asyncio<span class="token punctuation">.</span>create_task<span class="token punctuation">(</span>fetch<span class="token punctuation">(</span>session<span class="token punctuation">,</span> url<span class="token punctuation">)</span><span class="token punctuation">)</span>
            tasks<span class="token punctuation">.</span>append<span class="token punctuation">(</span>task<span class="token punctuation">)</span>
        <span class="token comment"># gather 等待所有 fetch 完成</span>
        htmls <span class="token operator">=</span> <span class="token keyword">await</span> asyncio<span class="token punctuation">.</span>gather<span class="token punctuation">(</span><span class="token operator">*</span>tasks<span class="token punctuation">)</span>
        <span class="token comment"># 逐一解析</span>
        <span class="token keyword">for</span> html<span class="token punctuation">,</span> url <span class="token keyword">in</span> <span class="token builtin">zip</span><span class="token punctuation">(</span>htmls<span class="token punctuation">,</span> urls<span class="token punctuation">)</span><span class="token punctuation">:</span>
            <span class="token keyword">await</span> parse<span class="token punctuation">(</span>html<span class="token punctuation">,</span> url<span class="token punctuation">)</span>

<span class="token keyword">if</span> __name__ <span class="token operator">==</span> <span class="token string">'__main__'</span><span class="token punctuation">:</span>
    urls <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token string-interpolation"><span class="token string">f'https://example.com/page/</span><span class="token interpolation"><span class="token punctuation">{</span>i<span class="token punctuation">}</span></span><span class="token string">'</span></span> <span class="token keyword">for</span> i <span class="token keyword">in</span> <span class="token builtin">range</span><span class="token punctuation">(</span><span class="token number">1</span><span class="token punctuation">,</span> <span class="token number">101</span><span class="token punctuation">)</span><span class="token punctuation">]</span>
    asyncio<span class="token punctuation">.</span>run<span class="token punctuation">(</span>main<span class="token punctuation">(</span>urls<span class="token punctuation">)</span><span class="token punctuation">)</span>
</code></pre> </li> 
   <li> <p><strong>说明</strong>:</p> 
    <ul> 
     <li><code>aiohttp.TCPConnector(limit=50)</code> 将并发连接限制在 50,避免短时间内打开过多连接而被服务器封禁。</li> 
     <li><code>asyncio.create_task</code> 创建并发 Task,交由事件循环调度。</li> 
     <li><code>await asyncio.gather(*)</code> 等待所有任务完成。</li> 
    </ul> </li> 
  </ol> 
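  <p>除了像上面那样在 <code>fetch</code> 内部捕获异常,也可以让 <code>gather</code> 把异常当作结果收集起来,事后统一处理。下面是一个本地模拟的小例子(不发真实请求),演示 <code>return_exceptions=True</code> 的用法:</p>

```python
# return_exceptions=True:异常不再向上抛出,而是混在结果列表中返回,
# 便于批量抓取后统一区分成功与失败(此处用本地协程模拟请求)
import asyncio

async def may_fail(i: int) -> str:
    await asyncio.sleep(0)          # 模拟一次异步 I/O
    if i % 2 == 0:
        raise ValueError(f'第 {i} 个请求失败')
    return f'第 {i} 个请求成功'

async def main() -> tuple:
    tasks = [may_fail(i) for i in range(6)]
    results = await asyncio.gather(*tasks, return_exceptions=True)
    ok = sum(1 for r in results if not isinstance(r, Exception))
    failed = sum(1 for r in results if isinstance(r, Exception))
    return ok, failed

if __name__ == '__main__':
    ok, failed = asyncio.run(main())
    print(f'成功 {ok} 个,失败 {failed} 个')  # 成功 3 个,失败 3 个
```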
  <h4>8.3 使用 asyncio 协程池提高并发</h4> 
  <p>如果需要对抓取和解析做更精细的并行控制,可使用 <code>asyncio.Semaphore</code> 或第三方协程池库(如 aiomultiprocess、aiojobs)来控制并发数。</p> 
  <pre><code class="prism language-python"><span class="token keyword">import</span> asyncio
<span class="token keyword">import</span> aiohttp
<span class="token keyword">from</span> bs4 <span class="token keyword">import</span> BeautifulSoup

semaphore <span class="token operator">=</span> asyncio<span class="token punctuation">.</span>Semaphore<span class="token punctuation">(</span><span class="token number">20</span><span class="token punctuation">)</span>  <span class="token comment"># 最多同时跑 20 个协程(Python 3.10+ 可在协程外创建;更早版本建议在 main() 内创建并传入)</span>

<span class="token keyword">async</span> <span class="token keyword">def</span> <span class="token function">fetch_with_sem</span><span class="token punctuation">(</span>session<span class="token punctuation">,</span> url<span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token keyword">async</span> <span class="token keyword">with</span> semaphore<span class="token punctuation">:</span>
        <span class="token keyword">try</span><span class="token punctuation">:</span>
            <span class="token keyword">async</span> <span class="token keyword">with</span> session<span class="token punctuation">.</span>get<span class="token punctuation">(</span>url<span class="token punctuation">,</span> timeout<span class="token operator">=</span><span class="token number">10</span><span class="token punctuation">)</span> <span class="token keyword">as</span> resp<span class="token punctuation">:</span>
                <span class="token keyword">return</span> <span class="token keyword">await</span> resp<span class="token punctuation">.</span>text<span class="token punctuation">(</span><span class="token punctuation">)</span>
        <span class="token keyword">except</span> Exception <span class="token keyword">as</span> e<span class="token punctuation">:</span>
            <span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string-interpolation"><span class="token string">f'Error fetching </span><span class="token interpolation"><span class="token punctuation">{</span>url<span class="token punctuation">}</span></span><span class="token string">: </span><span class="token interpolation"><span class="token punctuation">{</span>e<span class="token punctuation">}</span></span><span class="token string">'</span></span><span class="token punctuation">)</span>
            <span class="token keyword">return</span> <span class="token boolean">None</span>

<span class="token keyword">async</span> <span class="token keyword">def</span> <span class="token function">main</span><span class="token punctuation">(</span>urls<span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token keyword">async</span> <span class="token keyword">with</span> aiohttp<span class="token punctuation">.</span>ClientSession<span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token keyword">as</span> session<span class="token punctuation">:</span>
        tasks <span class="token operator">=</span> <span class="token punctuation">[</span>asyncio<span class="token punctuation">.</span>create_task<span class="token punctuation">(</span>fetch_with_sem<span class="token punctuation">(</span>session<span class="token punctuation">,</span> url<span class="token punctuation">)</span><span class="token punctuation">)</span> <span class="token keyword">for</span> url <span class="token keyword">in</span> urls<span class="token punctuation">]</span>
        results <span class="token operator">=</span> <span class="token keyword">await</span> asyncio<span class="token punctuation">.</span>gather<span class="token punctuation">(</span><span class="token operator">*</span>tasks<span class="token punctuation">)</span>
        <span class="token keyword">for</span> html<span class="token punctuation">,</span> url <span class="token keyword">in</span> <span class="token builtin">zip</span><span class="token punctuation">(</span>results<span class="token punctuation">,</span> urls<span class="token punctuation">)</span><span class="token punctuation">:</span>
            <span class="token keyword">if</span> html<span class="token punctuation">:</span>
                title <span class="token operator">=</span> BeautifulSoup<span class="token punctuation">(</span>html<span class="token punctuation">,</span> <span class="token string">'lxml'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>find<span class="token punctuation">(</span><span class="token string">'title'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get_text<span class="token punctuation">(</span>strip<span class="token operator">=</span><span class="token boolean">True</span><span class="token punctuation">)</span>
                <span class="token keyword">print</span><span class="token punctuation">(</span>url<span class="token punctuation">,</span> title<span class="token punctuation">)</span>

<span class="token keyword">if</span> __name__ <span class="token operator">==</span> <span class="token string">'__main__'</span><span class="token punctuation">:</span>
    sample_urls <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token string-interpolation"><span class="token string">f'https://example.com/page/</span><span class="token interpolation"><span class="token punctuation">{</span>i<span class="token punctuation">}</span></span><span class="token string">'</span></span> <span class="token keyword">for</span> i <span class="token keyword">in</span> <span class="token builtin">range</span><span class="token punctuation">(</span><span class="token number">1</span><span class="token punctuation">,</span> <span class="token number">51</span><span class="token punctuation">)</span><span class="token punctuation">]</span>
    asyncio<span class="token punctuation">.</span>run<span class="token punctuation">(</span>main<span class="token punctuation">(</span>sample_urls<span class="token punctuation">)</span><span class="token punctuation">)</span>
</code></pre> 
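  <p>除 <code>Semaphore</code> 外,也可以用 <code>asyncio.Queue</code> 搭配固定数量的 worker 协程,自己实现一个简单的"协程池"。下面是一个本地模拟的示意(把 <code>asyncio.sleep</code> 换成真实的抓取调用即可):</p>

```python
# 协程"工作池"写法:固定数量的 worker 从队列取任务,效果与 Semaphore 类似,
# 但更接近传统线程池模型(此处以本地协程代替真实抓取)
import asyncio

async def worker(name: str, queue: asyncio.Queue, results: list) -> None:
    while True:
        url = await queue.get()
        try:
            await asyncio.sleep(0)          # 此处换成真实的 fetch(session, url)
            results.append((name, url))
        finally:
            queue.task_done()               # 通知队列该任务已处理完

async def crawl(urls: list, workers: int = 3) -> list:
    queue: asyncio.Queue = asyncio.Queue()
    for url in urls:
        queue.put_nowait(url)
    results: list = []
    tasks = [asyncio.create_task(worker(f'w{i}', queue, results))
             for i in range(workers)]
    await queue.join()                      # 等待所有任务被取出并完成
    for t in tasks:
        t.cancel()                          # 队列清空后取消常驻 worker
    await asyncio.gather(*tasks, return_exceptions=True)  # 回收已取消的 worker
    return results

if __name__ == '__main__':
    urls = [f'https://example.com/page/{i}' for i in range(1, 11)]
    done = asyncio.run(crawl(urls))
    print(f'共完成 {len(done)} 个任务')  # 共完成 10 个任务
```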
  <h4>8.4 HTTPX:Requests 的异步升级版</h4> 
  <ul> 
   <li> <p><strong>HTTPX</strong>:由 Encode 团队开发,与 <code>requests</code> API 十分相似,同时支持同步与异步模式。</p> </li> 
   <li> <p><strong>安装</strong>:</p> <pre><code class="prism language-bash">pip <span class="token function">install</span> httpx
</code></pre> </li> 
   <li> <p><strong>示例</strong>:</p> <pre><code class="prism language-python"><span class="token keyword">import</span> asyncio
<span class="token keyword">import</span> httpx
<span class="token keyword">from</span> bs4 <span class="token keyword">import</span> BeautifulSoup

<span class="token keyword">async</span> <span class="token keyword">def</span> <span class="token function">fetch</span><span class="token punctuation">(</span>client<span class="token punctuation">,</span> url<span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token keyword">try</span><span class="token punctuation">:</span>
        resp <span class="token operator">=</span> <span class="token keyword">await</span> client<span class="token punctuation">.</span>get<span class="token punctuation">(</span>url<span class="token punctuation">,</span> timeout<span class="token operator">=</span><span class="token number">10.0</span><span class="token punctuation">)</span>
        resp<span class="token punctuation">.</span>raise_for_status<span class="token punctuation">(</span><span class="token punctuation">)</span>
        <span class="token keyword">return</span> resp<span class="token punctuation">.</span>text
    <span class="token keyword">except</span> Exception <span class="token keyword">as</span> e<span class="token punctuation">:</span>
        <span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string-interpolation"><span class="token string">f'Error </span><span class="token interpolation"><span class="token punctuation">{</span>url<span class="token punctuation">}</span></span><span class="token string">: </span><span class="token interpolation"><span class="token punctuation">{</span>e<span class="token punctuation">}</span></span><span class="token string">'</span></span><span class="token punctuation">)</span>
        <span class="token keyword">return</span> <span class="token boolean">None</span>

<span class="token keyword">async</span> <span class="token keyword">def</span> <span class="token function">main</span><span class="token punctuation">(</span>urls<span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token keyword">async</span> <span class="token keyword">with</span> httpx<span class="token punctuation">.</span>AsyncClient<span class="token punctuation">(</span>limits<span class="token operator">=</span>httpx<span class="token punctuation">.</span>Limits<span class="token punctuation">(</span>max_connections<span class="token operator">=</span><span class="token number">50</span><span class="token punctuation">)</span><span class="token punctuation">)</span> <span class="token keyword">as</span> client<span class="token punctuation">:</span>
        tasks <span class="token operator">=</span> <span class="token punctuation">[</span>asyncio<span class="token punctuation">.</span>create_task<span class="token punctuation">(</span>fetch<span class="token punctuation">(</span>client<span class="token punctuation">,</span> url<span class="token punctuation">)</span><span class="token punctuation">)</span> <span class="token keyword">for</span> url <span class="token keyword">in</span> urls<span class="token punctuation">]</span>
        <span class="token keyword">for</span> coro <span class="token keyword">in</span> asyncio<span class="token punctuation">.</span>as_completed<span class="token punctuation">(</span>tasks<span class="token punctuation">)</span><span class="token punctuation">:</span>
            html <span class="token operator">=</span> <span class="token keyword">await</span> coro
            <span class="token keyword">if</span> html<span class="token punctuation">:</span>
                title <span class="token operator">=</span> BeautifulSoup<span class="token punctuation">(</span>html<span class="token punctuation">,</span> <span class="token string">'lxml'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>find<span class="token punctuation">(</span><span class="token string">'title'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get_text<span class="token punctuation">(</span>strip<span class="token operator">=</span><span class="token boolean">True</span><span class="token punctuation">)</span>
                <span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string">'Title:'</span><span class="token punctuation">,</span> title<span class="token punctuation">)</span>

<span class="token keyword">if</span> __name__ <span class="token operator">==</span> <span class="token string">'__main__'</span><span class="token punctuation">:</span>
    urls <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token string-interpolation"><span class="token string">f'https://example.com/page/</span><span class="token interpolation"><span class="token punctuation">{</span>i<span class="token punctuation">}</span></span><span class="token string">'</span></span> <span class="token keyword">for</span> i <span class="token keyword">in</span> <span class="token builtin">range</span><span class="token punctuation">(</span><span class="token number">1</span><span class="token punctuation">,</span> <span class="token number">101</span><span class="token punctuation">)</span><span class="token punctuation">]</span>
    asyncio<span class="token punctuation">.</span>run<span class="token punctuation">(</span>main<span class="token punctuation">(</span>urls<span class="token punctuation">)</span><span class="token punctuation">)</span>
</code></pre> </li> 
   <li> <p>与 <code>requests</code> 兼容的 API(如 <code>.get()</code>、<code>.post()</code>、<code>.json()</code>、<code>.text</code> 等),极大降低了上手门槛。</p> </li> 
  </ul> 
  <h4>8.5 异步下使用解析库示例(aiohttp + lxml)</h4> 
  <pre><code class="prism language-python"><span class="token keyword">import</span> asyncio
<span class="token keyword">import</span> aiohttp
<span class="token keyword">from</span> lxml <span class="token keyword">import</span> etree

<span class="token keyword">async</span> <span class="token keyword">def</span> <span class="token function">fetch_and_parse</span><span class="token punctuation">(</span>session<span class="token punctuation">,</span> url<span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token keyword">try</span><span class="token punctuation">:</span>
        <span class="token keyword">async</span> <span class="token keyword">with</span> session<span class="token punctuation">.</span>get<span class="token punctuation">(</span>url<span class="token punctuation">,</span> timeout<span class="token operator">=</span><span class="token number">10</span><span class="token punctuation">)</span> <span class="token keyword">as</span> resp<span class="token punctuation">:</span>
            text <span class="token operator">=</span> <span class="token keyword">await</span> resp<span class="token punctuation">.</span>text<span class="token punctuation">(</span><span class="token punctuation">)</span>
            tree <span class="token operator">=</span> etree<span class="token punctuation">.</span>HTML<span class="token punctuation">(</span>text<span class="token punctuation">)</span>
            <span class="token comment"># 提取所有 class="msg" 的 div 文本(xpath 返回列表)</span>
            msg <span class="token operator">=</span> tree<span class="token punctuation">.</span>xpath<span class="token punctuation">(</span><span class="token string">'//div[@class="msg"]/text()'</span><span class="token punctuation">)</span>
            <span class="token keyword">print</span><span class="token punctuation">(</span>url<span class="token punctuation">,</span> msg<span class="token punctuation">)</span>
    <span class="token keyword">except</span> Exception <span class="token keyword">as</span> e<span class="token punctuation">:</span>
        <span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string-interpolation"><span class="token string">f'Error fetching </span><span class="token interpolation"><span class="token punctuation">{</span>url<span class="token punctuation">}</span></span><span class="token string">: </span><span class="token interpolation"><span class="token punctuation">{</span>e<span class="token punctuation">}</span></span><span class="token string">'</span></span><span class="token punctuation">)</span>

<span class="token keyword">async</span> <span class="token keyword">def</span> <span class="token function">main</span><span class="token punctuation">(</span>urls<span class="token punctuation">)</span><span class="token punctuation">:</span>
    conn <span class="token operator">=</span> aiohttp<span class="token punctuation">.</span>TCPConnector<span class="token punctuation">(</span>limit<span class="token operator">=</span><span class="token number">30</span><span class="token punctuation">)</span>
    <span class="token keyword">async</span> <span class="token keyword">with</span> aiohttp<span class="token punctuation">.</span>ClientSession<span class="token punctuation">(</span>connector<span class="token operator">=</span>conn<span class="token punctuation">)</span> <span class="token keyword">as</span> session<span class="token punctuation">:</span>
        tasks <span class="token operator">=</span> <span class="token punctuation">[</span>fetch_and_parse<span class="token punctuation">(</span>session<span class="token punctuation">,</span> url<span class="token punctuation">)</span> <span class="token keyword">for</span> url <span class="token keyword">in</span> urls<span class="token punctuation">]</span>
        <span class="token keyword">await</span> asyncio<span class="token punctuation">.</span>gather<span class="token punctuation">(</span><span class="token operator">*</span>tasks<span class="token punctuation">)</span>

<span class="token keyword">if</span> __name__ <span class="token operator">==</span> <span class="token string">'__main__'</span><span class="token punctuation">:</span>
    url_list <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token string-interpolation"><span class="token string">f'https://example.com/messages/</span><span class="token interpolation"><span class="token punctuation">{</span>i<span class="token punctuation">}</span></span><span class="token string">'</span></span> <span class="token keyword">for</span> i <span class="token keyword">in</span> <span class="token builtin">range</span><span class="token punctuation">(</span><span class="token number">1</span><span class="token punctuation">,</span> <span class="token number">51</span><span class="token punctuation">)</span><span class="token punctuation">]</span>
    asyncio<span class="token punctuation">.</span>run<span class="token punctuation">(</span>main<span class="token punctuation">(</span>url_list<span class="token punctuation">)</span><span class="token punctuation">)</span>
</code></pre> 
  <hr> 
  <h3>9. 数据存储与去重</h3> 
  <p>爬虫的最终目的是获取并存储有价值的数据,因此选择合适的存储方式与去重机制至关重要。</p> 
  <h4>9.1 本地文件:CSV、JSON、SQLite</h4> 
  <ol> 
   <li> <p><strong>CSV/JSON</strong>:</p> 
    <ul> 
     <li>适合一次性、容量较小、对数据结构要求不高的场景。</li> 
     <li>直接用 Python 标准库即可读写。</li> 
    </ul> </li> 
   <li> <p><strong>SQLite</strong>:</p> 
    <ul> 
     <li> <p>轻量级嵌入式数据库,无需额外部署数据库服务器。</p> </li> 
      <li> <p>适合中小规模项目,从几万条到百万级数据都能轻松应对。</p> </li> 
     <li> <p>示例:</p> <pre><code class="prism language-python"><span class="token keyword">import</span> sqlite3

conn <span class="token operator">=</span> sqlite3<span class="token punctuation">.</span>connect<span class="token punctuation">(</span><span class="token string">'data.db'</span><span class="token punctuation">)</span>
cursor <span class="token operator">=</span> conn<span class="token punctuation">.</span>cursor<span class="token punctuation">(</span><span class="token punctuation">)</span>
cursor<span class="token punctuation">.</span>execute<span class="token punctuation">(</span><span class="token string">'CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, title TEXT, url TEXT UNIQUE)'</span><span class="token punctuation">)</span>
data <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token punctuation">(</span><span class="token string">'标题1'</span><span class="token punctuation">,</span> <span class="token string">'https://a.com/1'</span><span class="token punctuation">)</span><span class="token punctuation">,</span> <span class="token punctuation">(</span><span class="token string">'标题2'</span><span class="token punctuation">,</span> <span class="token string">'https://a.com/2'</span><span class="token punctuation">)</span><span class="token punctuation">]</span>
<span class="token keyword">for</span> title<span class="token punctuation">,</span> url <span class="token keyword">in</span> data<span class="token punctuation">:</span>
    <span class="token keyword">try</span><span class="token punctuation">:</span>
        cursor<span class="token punctuation">.</span>execute<span class="token punctuation">(</span><span class="token string">'INSERT INTO items (title, url) VALUES (?, ?)'</span><span class="token punctuation">,</span> <span class="token punctuation">(</span>title<span class="token punctuation">,</span> url<span class="token punctuation">)</span><span class="token punctuation">)</span>
    <span class="token keyword">except</span> sqlite3<span class="token punctuation">.</span>IntegrityError<span class="token punctuation">:</span>
        <span class="token keyword">pass</span>  <span class="token comment"># 去重</span>
conn<span class="token punctuation">.</span>commit<span class="token punctuation">(</span><span class="token punctuation">)</span>
conn<span class="token punctuation">.</span>close<span class="token punctuation">(</span><span class="token punctuation">)</span>
</code></pre> </li> 
    </ul> </li> 
  </ol> 
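<p>上面提到的 CSV/JSON 存储,用 Python 标准库即可完成。下面是一个最小示意(字段名、文件名均为示例):</p>

```python
import csv
import json

items = [
    {'title': '标题1', 'url': 'https://a.com/1'},
    {'title': '标题2', 'url': 'https://a.com/2'},
]

# 写 CSV:newline='' 避免 Windows 下出现空行,
# utf-8-sig 带 BOM,Excel 打开不乱码
with open('items.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.DictWriter(f, fieldnames=['title', 'url'])
    writer.writeheader()
    writer.writerows(items)

# 写 JSON:ensure_ascii=False 保留中文原文
with open('items.json', 'w', encoding='utf-8') as f:
    json.dump(items, f, ensure_ascii=False, indent=2)
```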
  <h4>9.2 MySQL/PostgreSQL 等关系型数据库</h4> 
  <ul> 
   <li> <p><strong>优点</strong>:适合大规模数据存储,支持 SQL 强大的查询功能,能更好地做数据分析、统计。</p> </li> 
   <li> <p><strong>安装</strong>:先安装对应数据库服务器(MySQL、MariaDB、PostgreSQL),然后在 Python 中安装驱动:</p> <pre><code class="prism language-bash">pip <span class="token function">install</span> pymysql  <span class="token comment"># MySQL</span>
pip <span class="token function">install</span> psycopg2-binary <span class="token comment"># PostgreSQL(预编译包;生产环境也可改用源码包 psycopg2)</span>
</code></pre> </li> 
   <li> <p><strong>示例(MySQL)</strong>:</p> <pre><code class="prism language-python"><span class="token keyword">import</span> pymysql

conn <span class="token operator">=</span> pymysql<span class="token punctuation">.</span>connect<span class="token punctuation">(</span>host<span class="token operator">=</span><span class="token string">'localhost'</span><span class="token punctuation">,</span> user<span class="token operator">=</span><span class="token string">'root'</span><span class="token punctuation">,</span> password<span class="token operator">=</span><span class="token string">'root'</span><span class="token punctuation">,</span> db<span class="token operator">=</span><span class="token string">'spider_db'</span><span class="token punctuation">,</span> charset<span class="token operator">=</span><span class="token string">'utf8mb4'</span><span class="token punctuation">)</span>
cursor <span class="token operator">=</span> conn<span class="token punctuation">.</span>cursor<span class="token punctuation">(</span><span class="token punctuation">)</span>
cursor<span class="token punctuation">.</span>execute<span class="token punctuation">(</span><span class="token triple-quoted-string string">'''
    CREATE TABLE IF NOT EXISTS articles (
        id INT AUTO_INCREMENT PRIMARY KEY,
        title VARCHAR(255),
        url VARCHAR(255) UNIQUE
    ) CHARACTER SET utf8mb4;
'''</span><span class="token punctuation">)</span>
data <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token punctuation">(</span><span class="token string">'标题1'</span><span class="token punctuation">,</span> <span class="token string">'https://a.com/1'</span><span class="token punctuation">)</span><span class="token punctuation">,</span> <span class="token punctuation">(</span><span class="token string">'标题2'</span><span class="token punctuation">,</span> <span class="token string">'https://a.com/2'</span><span class="token punctuation">)</span><span class="token punctuation">]</span>
<span class="token keyword">for</span> title<span class="token punctuation">,</span> url <span class="token keyword">in</span> data<span class="token punctuation">:</span>
    <span class="token keyword">try</span><span class="token punctuation">:</span>
        cursor<span class="token punctuation">.</span>execute<span class="token punctuation">(</span><span class="token string">'INSERT INTO articles (title, url) VALUES (%s, %s)'</span><span class="token punctuation">,</span> <span class="token punctuation">(</span>title<span class="token punctuation">,</span> url<span class="token punctuation">)</span><span class="token punctuation">)</span>
    <span class="token keyword">except</span> pymysql<span class="token punctuation">.</span>err<span class="token punctuation">.</span>IntegrityError<span class="token punctuation">:</span>
        <span class="token keyword">pass</span>
conn<span class="token punctuation">.</span>commit<span class="token punctuation">(</span><span class="token punctuation">)</span>
conn<span class="token punctuation">.</span>close<span class="token punctuation">(</span><span class="token punctuation">)</span>
</code></pre> </li> 
  </ul> 
  <h4>9.3 MongoDB 等 NoSQL 存储</h4> 
  <ul> 
   <li> <p><strong>优点</strong>:文档型数据库,对半结构化 JSON 数据支持友好,可灵活存储字段不同的条目。</p> </li> 
   <li> <p><strong>安装与驱动</strong>:</p> 
    <ul> 
     <li>本地安装 MongoDB 或使用云服务;</li> 
     <li>Python 驱动:<code>pip install pymongo</code>。</li> 
    </ul> </li> 
   <li> <p><strong>示例</strong>:</p> <pre><code class="prism language-python"><span class="token keyword">from</span> pymongo <span class="token keyword">import</span> MongoClient

client <span class="token operator">=</span> MongoClient<span class="token punctuation">(</span><span class="token string">'mongodb://localhost:27017/'</span><span class="token punctuation">)</span>
db <span class="token operator">=</span> client<span class="token punctuation">[</span><span class="token string">'spider_db'</span><span class="token punctuation">]</span>
collection <span class="token operator">=</span> db<span class="token punctuation">[</span><span class="token string">'articles'</span><span class="token punctuation">]</span>
<span class="token comment"># 插入或更新(去重依据:url)</span>
data <span class="token operator">=</span> <span class="token punctuation">{</span><span class="token string">'title'</span><span class="token punctuation">:</span> <span class="token string">'标题1'</span><span class="token punctuation">,</span> <span class="token string">'url'</span><span class="token punctuation">:</span> <span class="token string">'https://a.com/1'</span><span class="token punctuation">,</span> <span class="token string">'tags'</span><span class="token punctuation">:</span> <span class="token punctuation">[</span><span class="token string">'新闻'</span><span class="token punctuation">,</span> <span class="token string">'推荐'</span><span class="token punctuation">]</span><span class="token punctuation">}</span>
collection<span class="token punctuation">.</span>update_one<span class="token punctuation">(</span><span class="token punctuation">{</span><span class="token string">'url'</span><span class="token punctuation">:</span> data<span class="token punctuation">[</span><span class="token string">'url'</span><span class="token punctuation">]</span><span class="token punctuation">}</span><span class="token punctuation">,</span> <span class="token punctuation">{</span><span class="token string">'$set'</span><span class="token punctuation">:</span> data<span class="token punctuation">}</span><span class="token punctuation">,</span> upsert<span class="token operator">=</span><span class="token boolean">True</span><span class="token punctuation">)</span>
</code></pre> </li> 
  </ul> 
  <h4>9.4 Redis 用作去重与短期缓存</h4> 
  <ul> 
   <li> <p><strong>Redis</strong>:键值存储,支持超高并发访问,非常适合做指纹去重、短期缓存、队列等。</p> </li> 
   <li> <p><strong>常见策略</strong>:</p> 
    <ol> 
     <li><strong>布隆过滤器(Bloom Filter)</strong>:当 URL 数量达到数百万级别时,普通 Python 集合会占用大量内存,布隆过滤器用空间换时间,以极少内存判断某个 URL 是否已爬取(有一定误判率)。可以使用 <code>pybloom-live</code> 或直接在 Redis 中搭建 Bloom Filter(如 RedisBloom 模块)。</li> 
     <li><strong>Redis Set</strong>:小规模去重可直接用 Redis set 存储已爬 URL。</li> 
    </ol> <pre><code class="prism language-python"><span class="token keyword">import</span> redis

r <span class="token operator">=</span> redis<span class="token punctuation">.</span>Redis<span class="token punctuation">(</span>host<span class="token operator">=</span><span class="token string">'localhost'</span><span class="token punctuation">,</span> port<span class="token operator">=</span><span class="token number">6379</span><span class="token punctuation">,</span> db<span class="token operator">=</span><span class="token number">0</span><span class="token punctuation">)</span>
url <span class="token operator">=</span> <span class="token string">'https://example.com/page/1'</span>
<span class="token comment"># 尝试添加到 set,返回 1 表示新添加,返回 0 表示已存在</span>
<span class="token keyword">if</span> r<span class="token punctuation">.</span>sadd<span class="token punctuation">(</span><span class="token string">'visited_urls'</span><span class="token punctuation">,</span> url<span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string">'新 URL,可爬取'</span><span class="token punctuation">)</span>
<span class="token keyword">else</span><span class="token punctuation">:</span>
    <span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string">'URL 已存在,跳过'</span><span class="token punctuation">)</span>
</code></pre> </li> 
  </ul> 
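<p>为帮助理解上文布隆过滤器的原理,下面用标准库 <code>hashlib</code> 写一个极简示意(位数组大小、哈希个数均为演示取值;生产环境请直接使用 <code>pybloom-live</code> 或 RedisBloom):</p>

```python
import hashlib

class TinyBloom:
    """极简布隆过滤器示意:k 个哈希位全部命中才判定"可能已存在"。"""

    def __init__(self, size_bits=1 << 20, num_hashes=5):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)  # 位数组,初始全 0

    def _positions(self, item: str):
        # 用 md5(序号 + item) 派生 k 个相互独立的位下标
        for i in range(self.k):
            h = hashlib.md5(f'{i}:{item}'.encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = TinyBloom()
bf.add('https://example.com/page/1')
print('https://example.com/page/1' in bf)  # True(已添加过)
print('https://example.com/page/2' in bf)  # 极大概率 False(未添加,存在微小误判率)
```

<p>注意布隆过滤器只会"误判已存在",不会漏判,因此最坏情况是少爬几个 URL,而不会重复爬取。</p>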
  <h4>9.5 去重策略:指纹、哈希、Bloom Filter</h4> 
  <ul> 
   <li> <p><strong>指纹</strong>:通常对 URL 做标准化(去掉排序不同但内容相同的参数、多余的斜杠),然后对标准化后 URL 做哈希(如 MD5、SHA1),存到 Set 中对比。</p> </li> 
   <li> <p><strong>Bloom Filter</strong>:一种以极少内存做到高效去重的概率算法,对大规模 URL 判断去重十分划算,但有极小误判率(可能会把未访问的 URL 误判为已访问)。</p> </li> 
   <li> <p><strong>库推荐</strong>:</p> 
    <ul> 
     <li><code>pybloom-live</code>:纯 Python 布隆过滤器库;</li> 
      <li>Redis 官方 <code>RedisBloom</code> 模块及其 Python 客户端(需 Redis 加载相应扩展);</li> 
      <li>Scrapy 内置 <code>scrapy.dupefilters.RFPDupeFilter</code>,默认把请求指纹保存在内存中(设置 <code>JOBDIR</code> 后可持久化到磁盘);配合 Scrapy-Redis 则可把指纹存入 Redis,实现分布式去重。</li> 
    </ul> </li> 
  </ul> 
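<p>"标准化 + 哈希"的指纹流程可以用标准库实现。下面是一个简化示意,只演示参数排序与去尾斜杠两条标准化规则(实际项目可按需增删规则):</p>

```python
import hashlib
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def url_fingerprint(url: str) -> str:
    """对 URL 做标准化后取 MD5 作为指纹。"""
    scheme, netloc, path, query, _ = urlsplit(url)
    # 规则 1:查询参数按 key 排序,消除 ?a=1&b=2 与 ?b=2&a=1 的差异
    query = urlencode(sorted(parse_qsl(query)))
    # 规则 2:去掉路径末尾多余斜杠(根路径 "/" 除外)
    if len(path) > 1:
        path = path.rstrip('/')
    normalized = urlunsplit((scheme, netloc, path, query, ''))
    return hashlib.md5(normalized.encode('utf-8')).hexdigest()

seen = set()
for u in ['https://a.com/list/?b=2&a=1', 'https://a.com/list?a=1&b=2']:
    fp = url_fingerprint(u)
    if fp in seen:
        print('重复:', u)   # 第二条会命中
    else:
        seen.add(fp)
        print('新 URL:', u)
```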
  <hr> 
  <h3>10. 分布式爬虫:Scrapy-Redis 与分布式调度</h3> 
  <p>当单机爬虫难以满足高并发、大规模抓取时,就需要分布式爬虫,将任务分布到多台机器协同完成。Scrapy-Redis 是社区中应用最广泛的 Scrapy 分布式扩展之一。</p> 
  <h4>10.1 为什么要做分布式?</h4> 
  <ul> 
   <li><strong>海量链接</strong>:需要抓取数百万、上亿条 URL 时,单机进程/线程或协程都难以在可接受时间内完成。</li> 
   <li><strong>速度要求</strong>:需要更短时间内获取全量数据,提高爬取速度。</li> 
   <li><strong>容错与扩展</strong>:分布式部署可实现节点增减、机器故障自愈等。</li> 
  </ul> 
  <h4>10.2 Scrapy-Redis 简介与安装</h4> 
  <ul> 
   <li> <p><strong>Scrapy-Redis</strong>:基于 Redis 存储队列与去重指纹,实现分布式调度、分布式去重、数据共享的 Scrapy 扩展。</p> </li> 
   <li> <p><strong>安装</strong>:</p> <pre><code class="prism language-bash">pip <span class="token function">install</span> scrapy-redis
</code></pre> </li> 
  </ul> 
  <h4>10.3 分布式去重队列与调度</h4> 
  <ol> 
   <li> <p><strong>在 Scrapy 项目中集成 Scrapy-Redis</strong></p> 
    <ul> 
     <li> <p>修改 <code>settings.py</code>:</p> <pre><code class="prism language-python"><span class="token comment"># settings.py</span>
<span class="token comment"># 使用 redis 作为调度器</span>
SCHEDULER <span class="token operator">=</span> <span class="token string">"scrapy_redis.scheduler.Scheduler"</span>
<span class="token comment"># 爬虫重启后保留 Redis 中未爬完的请求队列(断点续爬)</span>
SCHEDULER_PERSIST <span class="token operator">=</span> <span class="token boolean">True</span>
<span class="token comment"># 使用 redis 去重(替换默认的 RFPDupeFilter)</span>
DUPEFILTER_CLASS <span class="token operator">=</span> <span class="token string">"scrapy_redis.dupefilter.RFPDupeFilter"</span>
<span class="token comment"># 指定 redis 链接地址</span>
REDIS_URL <span class="token operator">=</span> <span class="token string">'redis://:password@127.0.0.1:6379/0'</span>
<span class="token comment"># 将 item 存入 redis 由其他进程或管道处理</span>
ITEM_PIPELINES <span class="token operator">=</span> <span class="token punctuation">{</span>
    <span class="token string">'scrapy_redis.pipelines.RedisPipeline'</span><span class="token punctuation">:</span> <span class="token number">300</span>
<span class="token punctuation">}</span>
<span class="token comment"># 指定用来存储队列的 redis key 前缀</span>
REDIS_ITEMS_KEY <span class="token operator">=</span> <span class="token string">'%(spider)s:items'</span>
REDIS_START_URLS_KEY <span class="token operator">=</span> <span class="token string">'%(name)s:start_urls'</span>
</code></pre> </li> 
    </ul> </li> 
   <li> <p><strong>修改 Spider</strong></p> 
    <ul> 
     <li>继承 <code>scrapy_redis.spiders.RedisSpider</code> 或 <code>RedisCrawlSpider</code>,将原本的 <code>start_urls</code> 替换为从 Redis 队列中获取种子 URL。</li> 
    </ul> <pre><code class="prism language-python"><span class="token comment"># myproject/spiders/redis_quotes.py</span>

<span class="token keyword">from</span> scrapy_redis<span class="token punctuation">.</span>spiders <span class="token keyword">import</span> RedisSpider
<span class="token keyword">from</span> myproject<span class="token punctuation">.</span>items <span class="token keyword">import</span> MyprojectItem

<span class="token keyword">class</span> <span class="token class-name">RedisQuotesSpider</span><span class="token punctuation">(</span>RedisSpider<span class="token punctuation">)</span><span class="token punctuation">:</span>
    name <span class="token operator">=</span> <span class="token string">'redis_quotes'</span>
    <span class="token comment"># Redis 中存放 start_urls 的 key</span>
    redis_key <span class="token operator">=</span> <span class="token string">'redis_quotes:start_urls'</span>

    <span class="token keyword">def</span> <span class="token function">parse</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> response<span class="token punctuation">)</span><span class="token punctuation">:</span>
        <span class="token keyword">for</span> quote <span class="token keyword">in</span> response<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'div.quote'</span><span class="token punctuation">)</span><span class="token punctuation">:</span>
            item <span class="token operator">=</span> MyprojectItem<span class="token punctuation">(</span><span class="token punctuation">)</span>
            item<span class="token punctuation">[</span><span class="token string">'text'</span><span class="token punctuation">]</span> <span class="token operator">=</span> quote<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'span.text::text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token punctuation">)</span>
            item<span class="token punctuation">[</span><span class="token string">'author'</span><span class="token punctuation">]</span> <span class="token operator">=</span> quote<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'small.author::text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token punctuation">)</span>
            item<span class="token punctuation">[</span><span class="token string">'tags'</span><span class="token punctuation">]</span> <span class="token operator">=</span> quote<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'div.tags a.tag::text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>getall<span class="token punctuation">(</span><span class="token punctuation">)</span>
            <span class="token keyword">yield</span> item

        next_page <span class="token operator">=</span> response<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'li.next a::attr(href)'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token punctuation">)</span>
        <span class="token keyword">if</span> next_page<span class="token punctuation">:</span>
            <span class="token keyword">yield</span> response<span class="token punctuation">.</span>follow<span class="token punctuation">(</span>next_page<span class="token punctuation">,</span> callback<span class="token operator">=</span>self<span class="token punctuation">.</span>parse<span class="token punctuation">)</span>
</code></pre> </li> 
   <li> <p><strong>将种子 URL 推入 Redis</strong></p> 
    <ul> 
     <li> <p>在本地或远程机器上,用 <code>redis-cli</code> 将种子 URL 推入列表:</p> <pre><code class="prism language-bash">redis-cli
lpush redis_quotes:start_urls <span class="token string">"https://quotes.toscrape.com/"</span>
</code></pre> </li> 
    </ul> </li> 
   <li> <p><strong>启动分布式爬虫</strong></p> 
    <ul> 
     <li> <p>在多台服务器或多终端分别启动爬虫:</p> <pre><code class="prism language-bash">scrapy crawl redis_quotes
</code></pre> </li> 
     <li> <p>所有实例会从同一个 Redis 队列中获取 URL,去重也基于 Redis,互不重复。</p> </li> 
    </ul> </li> 
  </ol> 
  <h4>10.4 多机协作示例</h4> 
  <ol> 
   <li> <p>部署多台服务器(A、B、C),都能访问同一个 Redis 实例。</p> </li> 
   <li> <p>在 A 机上运行:</p> <pre><code class="prism language-bash">redis-server  <span class="token comment"># 启动 Redis(可独立部署)</span>
</code></pre> </li> 
   <li> <p>在 A、B、C 机上,各自拉取完整的 Scrapy 项目代码,并配置好 <code>settings.py</code> 中的 <code>REDIS_URL</code>。</p> </li> 
   <li> <p>在 A 机或任意一处,将种子 URL 塞入 Redis:</p> <pre><code class="prism language-bash">redis-cli <span class="token parameter variable">-h</span> A_ip <span class="token parameter variable">-p</span> <span class="token number">6379</span> lpush redis_quotes:start_urls <span class="token string">"https://quotes.toscrape.com/"</span>
</code></pre> </li> 
   <li> <p>在 A、B、C 分别运行:</p> <pre><code class="prism language-bash">scrapy crawl redis_quotes
</code></pre> 
    <ul> 
     <li>三台机器会自动协调,每台都从 Redis 队列中取 URL,去重也由 Redis 统一维护。</li> 
    </ul> </li> 
   <li> <p>数据收集:</p> 
    <ul> 
     <li>爬取的 Item 通过 <code>RedisPipeline</code> 自动存入 Redis 列表(key 按 <code>REDIS_ITEMS_KEY = '%(spider)s:items'</code> 生成,本例为 <code>redis_quotes:items</code>);</li> 
     <li>之后可通过独立脚本或 pipeline 再将数据持久化到数据库/文件。</li> 
    </ul> </li> 
  </ol> 
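<p><code>RedisPipeline</code> 默认把 item 序列化为 JSON 字符串后推入列表,所以持久化脚本的核心就是"弹出 → 反序列化 → 入库"。下面示意其中的解析部分(字段与上文 quotes 示例一致;真实脚本中 <code>raw</code> 应来自 <code>r.blpop('redis_quotes:items', timeout=5)[1]</code>,此处用示例字节串代替):</p>

```python
import json

def parse_item(raw: bytes) -> tuple:
    """把 Redis 列表中弹出的 JSON 字节串转成可入库的 (text, author, tags) 元组。"""
    item = json.loads(raw.decode('utf-8'))
    # tags 是列表,这里拼成逗号分隔的字符串便于写入关系型数据库
    return item['text'], item['author'], ','.join(item.get('tags', []))

# 真实场景:raw = r.blpop('redis_quotes:items', timeout=5)[1]
raw = b'{"text": "\\u540d\\u8a00", "author": "Someone", "tags": ["life"]}'
print(parse_item(raw))  # ('名言', 'Someone', 'life')
```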
  <hr> 
  <h3>11. 常见反爬与反制策略</h3> 
  <h4>11.1 频率限制与请求头伪装</h4> 
  <ol> 
   <li> <p><strong>访问频率控制(限速)</strong></p> 
    <ul> 
     <li> <p>对目标站设置随机或固定延时:</p> <pre><code class="prism language-python"><span class="token keyword">import</span> time<span class="token punctuation">,</span> random
time<span class="token punctuation">.</span>sleep<span class="token punctuation">(</span>random<span class="token punctuation">.</span>uniform<span class="token punctuation">(</span><span class="token number">1</span><span class="token punctuation">,</span> <span class="token number">3</span><span class="token punctuation">)</span><span class="token punctuation">)</span>  <span class="token comment"># 随机等待 1~3 秒</span>
</code></pre> </li> 
     <li> <p>Scrapy 中使用 <code>DOWNLOAD_DELAY</code>、<code>AUTOTHROTTLE_ENABLED</code> 等。</p> </li> 
    </ul> </li> 
   <li> <p><strong>User-Agent 伪装</strong></p> 
    <ul> 
     <li>通过随机 User-Agent 模拟不同浏览器。</li> 
     <li>代码示例见第 4.6 节。</li> 
    </ul> </li> 
   <li> <p><strong>Referer、Accept-Language、Accept-Encoding 等 Headers</strong></p> 
    <ul> 
     <li> <p>模拟真实浏览器请求时携带的完整 Header:</p> <pre><code class="prism language-python">headers <span class="token operator">=</span> <span class="token punctuation">{</span>
    <span class="token string">'User-Agent'</span><span class="token punctuation">:</span> <span class="token string">'Mozilla/5.0 ...'</span><span class="token punctuation">,</span>
    <span class="token string">'Referer'</span><span class="token punctuation">:</span> <span class="token string">'https://example.com/'</span><span class="token punctuation">,</span>
    <span class="token string">'Accept-Language'</span><span class="token punctuation">:</span> <span class="token string">'zh-CN,zh;q=0.9,en;q=0.8'</span><span class="token punctuation">,</span>
    <span class="token string">'Accept-Encoding'</span><span class="token punctuation">:</span> <span class="token string">'gzip, deflate, br'</span><span class="token punctuation">,</span>
    <span class="token comment"># 如有需要,可带上 Cookie</span>
    <span class="token string">'Cookie'</span><span class="token punctuation">:</span> <span class="token string">'sessionid=xxx; other=yyy'</span><span class="token punctuation">,</span>
<span class="token punctuation">}</span>
response <span class="token operator">=</span> requests<span class="token punctuation">.</span>get<span class="token punctuation">(</span>url<span class="token punctuation">,</span> headers<span class="token operator">=</span>headers<span class="token punctuation">)</span>
</code></pre> </li> 
    </ul> </li> 
  </ol> 
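<p>第 1 点提到的 Scrapy 限速,可以在 <code>settings.py</code> 中组合配置。下面是一个配置片段示意(数值均为示例,需按目标站点的承受能力调整):</p>

```python
# settings.py 限速相关配置示意
DOWNLOAD_DELAY = 2                  # 同一站点两次请求之间的基础间隔(秒)
RANDOMIZE_DOWNLOAD_DELAY = True     # 在 0.5~1.5 倍 DOWNLOAD_DELAY 之间随机浮动
CONCURRENT_REQUESTS_PER_DOMAIN = 4  # 单域名并发请求上限

# AutoThrottle:根据服务器响应延迟自动调节请求速度
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1        # 初始下载延时
AUTOTHROTTLE_MAX_DELAY = 10         # 高延迟时的最大下载延时
AUTOTHROTTLE_TARGET_CONCURRENCY = 2.0  # 期望的平均并发数
```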
  <h4>11.2 登录验证与 Cookie 管理</h4> 
  <ul> 
   <li> <p><strong>Session 对象</strong>:在 <code>requests</code> 中,使用 <code>requests.Session()</code> 方便统一管理 Cookie。</p> </li> 
   <li> <p><strong>模拟登录流程</strong>:</p> 
    <ol> 
     <li>获取登录页 <code>GET</code> 请求,拿到隐藏的 token(如 CSRF);</li> 
     <li>结合用户名/密码、token,<code>POST</code> 到登录接口;</li> 
     <li>成功后,<code>session</code> 内部有了 Cookie,后续使用同一 session 发起请求即可保持登录状态。</li> 
    </ol> </li> 
   <li> <p><strong>带 Cookie 抓取</strong>:</p> <pre><code class="prism language-python">session <span class="token operator">=</span> requests<span class="token punctuation">.</span>Session<span class="token punctuation">(</span><span class="token punctuation">)</span>
<span class="token comment"># 第一次请求,拿到 CSRF Token</span>
login_page <span class="token operator">=</span> session<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'https://example.com/login'</span><span class="token punctuation">)</span>
<span class="token comment"># 用 BeautifulSoup 解析隐藏 token</span>
<span class="token keyword">from</span> bs4 <span class="token keyword">import</span> BeautifulSoup
soup <span class="token operator">=</span> BeautifulSoup<span class="token punctuation">(</span>login_page<span class="token punctuation">.</span>text<span class="token punctuation">,</span> <span class="token string">'lxml'</span><span class="token punctuation">)</span>
token <span class="token operator">=</span> soup<span class="token punctuation">.</span>find<span class="token punctuation">(</span><span class="token string">'input'</span><span class="token punctuation">,</span> <span class="token punctuation">{</span><span class="token string">'name'</span><span class="token punctuation">:</span> <span class="token string">'csrf_token'</span><span class="token punctuation">}</span><span class="token punctuation">)</span><span class="token punctuation">[</span><span class="token string">'value'</span><span class="token punctuation">]</span>

<span class="token comment"># 构造登录表单</span>
data <span class="token operator">=</span> <span class="token punctuation">{</span>
    <span class="token string">'username'</span><span class="token punctuation">:</span> <span class="token string">'yourname'</span><span class="token punctuation">,</span>
    <span class="token string">'password'</span><span class="token punctuation">:</span> <span class="token string">'yourpwd'</span><span class="token punctuation">,</span>
    <span class="token string">'csrf_token'</span><span class="token punctuation">:</span> token
<span class="token punctuation">}</span>
<span class="token comment"># 登录</span>
session<span class="token punctuation">.</span>post<span class="token punctuation">(</span><span class="token string">'https://example.com/login'</span><span class="token punctuation">,</span> data<span class="token operator">=</span>data<span class="token punctuation">,</span> headers<span class="token operator">=</span><span class="token punctuation">{</span><span class="token string">'User-Agent'</span><span class="token punctuation">:</span> <span class="token string">'...'</span><span class="token punctuation">}</span><span class="token punctuation">)</span>
<span class="token comment"># 登录成功后用 session 继续抓取需要登录才能访问的页面</span>
profile <span class="token operator">=</span> session<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'https://example.com/profile'</span><span class="token punctuation">)</span>
<span class="token keyword">print</span><span class="token punctuation">(</span>profile<span class="token punctuation">.</span>text<span class="token punctuation">)</span>
</code></pre> </li> 
  </ul> 
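<p>登录一次后,可以把 session 中的 Cookie 持久化到本地,脚本重启时直接加载复用,减少重复登录、也降低触发验证码的概率。下面是一个极简草图(<code>save_cookies</code>/<code>load_cookies</code> 为示例函数名,并非任何库自带;注意 dict 方式序列化会丢失 cookie 的域、路径和有效期等元信息):</p>

```python
# 假设的工具函数:把登录成功后的 session cookies 存入本地 JSON,
# 下次启动时直接加载,避免重复登录(文件名 cookies.json 仅为示例)
import json
import requests

COOKIE_FILE = 'cookies.json'

def save_cookies(session: requests.Session, path: str = COOKIE_FILE) -> None:
    # RequestsCookieJar -> 普通 dict,便于 JSON 序列化(会丢失域/有效期等元信息)
    with open(path, 'w', encoding='utf-8') as f:
        json.dump(requests.utils.dict_from_cookiejar(session.cookies), f)

def load_cookies(session: requests.Session, path: str = COOKIE_FILE) -> None:
    with open(path, 'r', encoding='utf-8') as f:
        session.cookies.update(requests.utils.cookiejar_from_dict(json.load(f)))

if __name__ == '__main__':
    s = requests.Session()
    s.cookies.set('sessionid', 'abc123')   # 模拟登录后拿到的 cookie
    save_cookies(s)

    s2 = requests.Session()
    load_cookies(s2)
    print(s2.cookies.get('sessionid'))     # abc123
```

<p>加载后若请求仍被重定向到登录页,说明 Cookie 已过期,需要重新走一遍登录流程并覆盖本地文件。</p>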
  <h4>11.3 验证码识别(简单介绍)</h4> 
  <ul> 
   <li> <p><strong>常见验证码类型</strong>:</p> 
    <ul> 
     <li>图片验证码(扭曲字母/数字);</li> 
     <li>滑动验证码(拼图/拖动);</li> 
     <li>点选验证码(选特定图像);</li> 
     <li>行为/生物特征验证(人机验证)。</li> 
    </ul> </li> 
   <li> <p><strong>常用方案</strong>:</p> 
    <ol> 
     <li> <p><strong>简单 OCR 识别</strong>:用 <code>pytesseract</code>(除 pip 包外,还需在系统中另行安装 Tesseract-OCR 程序本体)对简单数字/字母验证码进行识别,但对扭曲度高或干扰线多的验证码成功率不高。</p> <pre><code class="prism language-bash">pip <span class="token function">install</span> pytesseract pillow
</code></pre> <pre><code class="prism language-python"><span class="token keyword">from</span> PIL <span class="token keyword">import</span> Image
<span class="token keyword">import</span> pytesseract

img <span class="token operator">=</span> Image<span class="token punctuation">.</span><span class="token builtin">open</span><span class="token punctuation">(</span><span class="token string">'captcha.png'</span><span class="token punctuation">)</span>
text <span class="token operator">=</span> pytesseract<span class="token punctuation">.</span>image_to_string<span class="token punctuation">(</span>img<span class="token punctuation">)</span><span class="token punctuation">.</span>strip<span class="token punctuation">(</span><span class="token punctuation">)</span>
<span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string">'识别结果:'</span><span class="token punctuation">,</span> text<span class="token punctuation">)</span>
</code></pre> </li> 
     <li> <p><strong>打码平台/人工打码</strong>:当验证码过于复杂时,可调用第三方打码平台 API(如超级鹰、打码兔等),将图片发送给平台,由平台返回识别结果;或者简单地由人工识别。</p> </li> 
     <li> <p><strong>绕过/获取接口</strong>:部分网站的验证码只在前端校验,后端登录接口并不强制验证。可以抓包找到真实的登录接口,直接模拟接口请求,从而绕过验证码。</p> </li> 
    </ol> </li> 
  </ul> 
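<p>直接把原图喂给 <code>pytesseract</code> 效果往往较差;先做灰度化、二值化等预处理,通常比更换 OCR 引擎更能提升简单验证码的识别率。下面是一个用 Pillow 做预处理的示例草图(阈值 140 为经验值,需按实际图片调整;这里用 <code>Image.new</code> 生成的纯色图代替真实验证码,实际使用时换成 <code>Image.open('captcha.png')</code>):</p>

```python
# 识别前的简单预处理:灰度化 + 二值化,去掉浅色背景和干扰
from PIL import Image

def preprocess(img: Image.Image, threshold: int = 140) -> Image.Image:
    gray = img.convert('L')  # 转灰度
    # 亮于阈值的像素置白,其余置黑,再转为 1 位黑白图
    return gray.point(lambda p: 255 if p > threshold else 0).convert('1')

if __name__ == '__main__':
    demo = Image.new('RGB', (120, 40), (200, 200, 200))  # 模拟一张灰底验证码
    binary = preprocess(demo)
    print(binary.mode, binary.size)
    # 预处理后再交给 pytesseract(此处注释掉以便离线运行):
    # import pytesseract
    # print(pytesseract.image_to_string(binary, config='--psm 7'))
```

<p>对干扰线较多的图片,还可以在二值化前加中值滤波等降噪步骤;但对现代行为验证码,OCR 思路基本无效,应考虑打码平台或接口分析。</p>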
  <h4>11.4 代理 IP 池的搭建与旋转</h4> 
  <ol> 
   <li> <p><strong>为什么要用代理</strong></p> 
    <ul> 
     <li>同一 IP 短时间内请求次数过多容易被封禁;使用代理 IP 池可以不断切换 IP,降低单 IP 请求频率。</li> 
    </ul> </li> 
   <li> <p><strong>获取代理</strong></p> 
    <ul> 
      <li><strong>免费代理</strong>:网上公开的免费代理 IP,一般不稳定、易失效。可用爬虫定期从免费代理网站(如 kuaidaili 的免费页;xicidaili 等老牌站点已停止服务)抓取代理并验证可用性;此类来源时效性差,需经常更换。</li> 
     <li><strong>付费代理</strong>:阿布云、快代理等付费代理服务,更稳定、更安全。</li> 
    </ul> </li> 
   <li> <p><strong>搭建本地简单代理池示例</strong>(以免费代理为例,仅供学习)</p> <pre><code class="prism language-python"><span class="token keyword">import</span> requests
<span class="token keyword">from</span> lxml <span class="token keyword">import</span> etree
<span class="token keyword">import</span> random
<span class="token keyword">import</span> time

<span class="token keyword">def</span> <span class="token function">fetch_free_proxies</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">:</span>
    url <span class="token operator">=</span> <span class="token string">'https://www.kuaidaili.com/free/inha/1/'</span>
    headers <span class="token operator">=</span> <span class="token punctuation">{</span><span class="token string">'User-Agent'</span><span class="token punctuation">:</span> <span class="token string">'Mozilla/5.0 ...'</span><span class="token punctuation">}</span>
    resp <span class="token operator">=</span> requests<span class="token punctuation">.</span>get<span class="token punctuation">(</span>url<span class="token punctuation">,</span> headers<span class="token operator">=</span>headers<span class="token punctuation">)</span>
    tree <span class="token operator">=</span> etree<span class="token punctuation">.</span>HTML<span class="token punctuation">(</span>resp<span class="token punctuation">.</span>text<span class="token punctuation">)</span>
    proxies <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token punctuation">]</span>
    <span class="token keyword">for</span> row <span class="token keyword">in</span> tree<span class="token punctuation">.</span>xpath<span class="token punctuation">(</span><span class="token string">'//table//tr'</span><span class="token punctuation">)</span><span class="token punctuation">[</span><span class="token number">1</span><span class="token punctuation">:</span><span class="token punctuation">]</span><span class="token punctuation">:</span>
        ip <span class="token operator">=</span> row<span class="token punctuation">.</span>xpath<span class="token punctuation">(</span><span class="token string">'./td[1]/text()'</span><span class="token punctuation">)</span><span class="token punctuation">[</span><span class="token number">0</span><span class="token punctuation">]</span>
        port <span class="token operator">=</span> row<span class="token punctuation">.</span>xpath<span class="token punctuation">(</span><span class="token string">'./td[2]/text()'</span><span class="token punctuation">)</span><span class="token punctuation">[</span><span class="token number">0</span><span class="token punctuation">]</span>
        proxy <span class="token operator">=</span> <span class="token string-interpolation"><span class="token string">f'http://</span><span class="token interpolation"><span class="token punctuation">{</span>ip<span class="token punctuation">}</span></span><span class="token string">:</span><span class="token interpolation"><span class="token punctuation">{</span>port<span class="token punctuation">}</span></span><span class="token string">'</span></span>
        <span class="token comment"># 简单校验</span>
        <span class="token keyword">try</span><span class="token punctuation">:</span>
            r <span class="token operator">=</span> requests<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'https://httpbin.org/ip'</span><span class="token punctuation">,</span> proxies<span class="token operator">=</span><span class="token punctuation">{</span><span class="token string">'http'</span><span class="token punctuation">:</span> proxy<span class="token punctuation">,</span> <span class="token string">'https'</span><span class="token punctuation">:</span> proxy<span class="token punctuation">}</span><span class="token punctuation">,</span> timeout<span class="token operator">=</span><span class="token number">3</span><span class="token punctuation">)</span>
            <span class="token keyword">if</span> r<span class="token punctuation">.</span>status_code <span class="token operator">==</span> <span class="token number">200</span><span class="token punctuation">:</span>
                proxies<span class="token punctuation">.</span>append<span class="token punctuation">(</span>proxy<span class="token punctuation">)</span>
        <span class="token keyword">except</span> requests<span class="token punctuation">.</span>RequestException<span class="token punctuation">:</span>
            <span class="token keyword">continue</span>
    <span class="token keyword">return</span> proxies

<span class="token keyword">def</span> <span class="token function">get_random_proxy</span><span class="token punctuation">(</span>proxies<span class="token punctuation">)</span><span class="token punctuation">:</span>
    <span class="token keyword">return</span> random<span class="token punctuation">.</span>choice<span class="token punctuation">(</span>proxies<span class="token punctuation">)</span> <span class="token keyword">if</span> proxies <span class="token keyword">else</span> <span class="token boolean">None</span>

<span class="token keyword">if</span> __name__ <span class="token operator">==</span> <span class="token string">'__main__'</span><span class="token punctuation">:</span>
    proxy_list <span class="token operator">=</span> fetch_free_proxies<span class="token punctuation">(</span><span class="token punctuation">)</span>
    <span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string">'可用代理:'</span><span class="token punctuation">,</span> proxy_list<span class="token punctuation">)</span>
    <span class="token comment"># 实际爬虫中使用示例:</span>
    proxy <span class="token operator">=</span> get_random_proxy<span class="token punctuation">(</span>proxy_list<span class="token punctuation">)</span>
    <span class="token keyword">if</span> proxy<span class="token punctuation">:</span>
        resp <span class="token operator">=</span> requests<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'https://example.com'</span><span class="token punctuation">,</span> proxies<span class="token operator">=</span><span class="token punctuation">{</span><span class="token string">'http'</span><span class="token punctuation">:</span> proxy<span class="token punctuation">,</span> <span class="token string">'https'</span><span class="token punctuation">:</span> proxy<span class="token punctuation">}</span><span class="token punctuation">,</span> timeout<span class="token operator">=</span><span class="token number">10</span><span class="token punctuation">)</span>
        <span class="token keyword">print</span><span class="token punctuation">(</span>resp<span class="token punctuation">.</span>status_code<span class="token punctuation">)</span>
</code></pre> </li> 
   <li> <p><strong>在 Scrapy 中配置代理</strong></p> 
    <ul> 
     <li> <p>简单在 <code>settings.py</code> 中设置:</p> <pre><code class="prism language-python"><span class="token comment"># settings.py</span>
<span class="token comment"># 下载中间件(若自定义 proxy pool、user-agent,则参照上文中间件示例)</span>
DOWNLOADER_MIDDLEWARES <span class="token operator">=</span> <span class="token punctuation">{</span>
    <span class="token string">'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware'</span><span class="token punctuation">:</span> <span class="token number">110</span><span class="token punctuation">,</span>
    <span class="token string">'myproject.middlewares.RandomProxyMiddleware'</span><span class="token punctuation">:</span> <span class="token number">100</span><span class="token punctuation">,</span>
<span class="token punctuation">}</span>
<span class="token comment"># 代理列表</span>
PROXY_LIST <span class="token operator">=</span> <span class="token punctuation">[</span>
    <span class="token string">'http://ip1:port1'</span><span class="token punctuation">,</span>
    <span class="token string">'http://ip2:port2'</span><span class="token punctuation">,</span>
    <span class="token comment"># ...</span>
<span class="token punctuation">]</span>
</code></pre> </li> 
     <li> <p>自定义 <code>RandomProxyMiddleware</code>:</p> <pre><code class="prism language-python"><span class="token comment"># myproject/middlewares.py</span>

<span class="token keyword">import</span> random

<span class="token keyword">class</span> <span class="token class-name">RandomProxyMiddleware</span><span class="token punctuation">:</span>
    <span class="token keyword">def</span> <span class="token function">__init__</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> proxies<span class="token punctuation">)</span><span class="token punctuation">:</span>
        self<span class="token punctuation">.</span>proxies <span class="token operator">=</span> proxies

    <span class="token decorator annotation punctuation">@classmethod</span>
    <span class="token keyword">def</span> <span class="token function">from_crawler</span><span class="token punctuation">(</span>cls<span class="token punctuation">,</span> crawler<span class="token punctuation">)</span><span class="token punctuation">:</span>
        <span class="token keyword">return</span> cls<span class="token punctuation">(</span>
            proxies<span class="token operator">=</span>crawler<span class="token punctuation">.</span>settings<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'PROXY_LIST'</span><span class="token punctuation">)</span>
        <span class="token punctuation">)</span>

    <span class="token keyword">def</span> <span class="token function">process_request</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> request<span class="token punctuation">,</span> spider<span class="token punctuation">)</span><span class="token punctuation">:</span>
        proxy <span class="token operator">=</span> random<span class="token punctuation">.</span>choice<span class="token punctuation">(</span>self<span class="token punctuation">.</span>proxies<span class="token punctuation">)</span>
        request<span class="token punctuation">.</span>meta<span class="token punctuation">[</span><span class="token string">'proxy'</span><span class="token punctuation">]</span> <span class="token operator">=</span> proxy
</code></pre> </li> 
     <li> <p>这样 Scrapy 在每次请求时会随机从 <code>PROXY_LIST</code> 中取一个代理。</p> </li> 
    </ul> </li> 
  </ol> 
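<p>上面的 <code>RandomProxyMiddleware</code> 只是随机取用;实际项目中通常还要记录每个代理的失败次数,及时剔除失效代理。下面是一个纯标准库的示意草图(<code>ProxyPool</code>、<code>mark_failed</code> 等均为假设的名字,可在中间件的 <code>process_exception</code> 钩子中调用 <code>mark_failed</code>):</p>

```python
# 极简代理池草图:代理带失败计数,连续失败达到上限即剔除,避免反复使用死代理
import random

class ProxyPool:
    def __init__(self, proxies, max_failures=3):
        self.failures = {p: 0 for p in proxies}  # 代理 -> 连续失败次数
        self.max_failures = max_failures

    def get(self):
        alive = list(self.failures)
        return random.choice(alive) if alive else None

    def mark_failed(self, proxy):
        if proxy in self.failures:
            self.failures[proxy] += 1
            if self.failures[proxy] >= self.max_failures:
                del self.failures[proxy]  # 连续失败达到上限,剔除

    def mark_ok(self, proxy):
        if proxy in self.failures:
            self.failures[proxy] = 0      # 成功一次即清零计数

if __name__ == '__main__':
    pool = ProxyPool(['http://ip1:port1', 'http://ip2:port2'])
    for _ in range(3):
        pool.mark_failed('http://ip1:port1')
    print(pool.get())   # 只剩 http://ip2:port2
```

<p>生产环境中,这一逻辑通常放到 Redis 中实现,方便多台爬虫机器共享同一个代理池。</p>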
  <hr> 
  <p></p> 
  <h3>12. 完整案例:爬取某新闻网站并存入数据库</h3> 
  <p>本节以“爬取某模拟新闻网站(示例:<code>https://news.example.com</code>)的头条新闻,并将标题、摘要、链接存入 MySQL 数据库”为例,完整演示 Scrapy + MySQL 的使用。</p> 
  <h4>12.1 需求分析</h4> 
  <ol> 
   <li><strong>目标数据</strong>:新闻标题、摘要(简介)、文章链接、发布时间。</li> 
   <li><strong>爬取范围</strong>:首页头条新闻(假设分页结构或动态加载,可视情况调整)。</li> 
   <li><strong>存储方式</strong>:MySQL 数据库,表名 <code>headline_news</code>,字段:<code>id, title, summary, url, pub_date</code>。</li> 
    <li><strong>反爬策略</strong>:设置随机 User-Agent、下载延时;如遇限流,可再接入前文的代理 IP 池。</li> 
  </ol> 
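<p>页面上抓到的发布时间通常是字符串(如 <code>2024-05-01 12:30</code> 或 <code>2024年5月1日</code>),入库到 DATETIME 字段前需要统一转换。下面是一个标准库实现的转换草图(<code>normalize_pub_date</code> 为示例函数名,格式列表按目标站点实际情况增删):</p>

```python
# 尝试按多种常见格式解析发布时间字符串;全部失败则返回 None
from datetime import datetime

FORMATS = ['%Y-%m-%d %H:%M:%S', '%Y-%m-%d %H:%M', '%Y-%m-%d', '%Y年%m月%d日']

def normalize_pub_date(text):
    text = (text or '').strip()
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    return None  # 无法解析时返回 None,由 Pipeline 决定是否丢弃该条

if __name__ == '__main__':
    print(normalize_pub_date('2024年5月1日'))   # 2024-05-01 00:00:00
```

<p>这一步可以放在 Spider 里,也可以放在 Pipeline 的 <code>process_item</code> 中集中处理;后者能保证所有入库数据格式一致。</p>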
  <h4>12.2 使用 Scrapy + MySQL 完整实现</h4> 
  <ol> 
   <li> <p><strong>创建 Scrapy 项目</strong></p> <pre><code class="prism language-bash">scrapy startproject news_spider
<span class="token builtin class-name">cd</span> news_spider
</code></pre> </li> 
   <li> <p><strong>安装依赖</strong></p> <pre><code class="prism language-bash">pip <span class="token function">install</span> scrapy pymysql
</code></pre> </li> 
   <li> <p><strong>定义 Item</strong> (<code>news_spider/items.py</code>)</p> <pre><code class="prism language-python"><span class="token keyword">import</span> scrapy

<span class="token keyword">class</span> <span class="token class-name">NewsSpiderItem</span><span class="token punctuation">(</span>scrapy<span class="token punctuation">.</span>Item<span class="token punctuation">)</span><span class="token punctuation">:</span>
    title <span class="token operator">=</span> scrapy<span class="token punctuation">.</span>Field<span class="token punctuation">(</span><span class="token punctuation">)</span>
    summary <span class="token operator">=</span> scrapy<span class="token punctuation">.</span>Field<span class="token punctuation">(</span><span class="token punctuation">)</span>
    url <span class="token operator">=</span> scrapy<span class="token punctuation">.</span>Field<span class="token punctuation">(</span><span class="token punctuation">)</span>
    pub_date <span class="token operator">=</span> scrapy<span class="token punctuation">.</span>Field<span class="token punctuation">(</span><span class="token punctuation">)</span>
</code></pre> </li> 
   <li> <p><strong>设置 MySQL 配置</strong> (<code>news_spider/settings.py</code>)</p> <pre><code class="prism language-python"><span class="token comment"># Database settings</span>
MYSQL_HOST <span class="token operator">=</span> <span class="token string">'localhost'</span>
MYSQL_PORT <span class="token operator">=</span> <span class="token number">3306</span>
MYSQL_USER <span class="token operator">=</span> <span class="token string">'root'</span>
MYSQL_PASSWORD <span class="token operator">=</span> <span class="token string">'root'</span>
MYSQL_DB <span class="token operator">=</span> <span class="token string">'news_db'</span>
MYSQL_CHARSET <span class="token operator">=</span> <span class="token string">'utf8mb4'</span>

<span class="token comment"># Item Pipeline</span>
ITEM_PIPELINES <span class="token operator">=</span> <span class="token punctuation">{</span>
    <span class="token string">'news_spider.pipelines.MySQLPipeline'</span><span class="token punctuation">:</span> <span class="token number">300</span><span class="token punctuation">,</span>
<span class="token punctuation">}</span>

<span class="token comment"># Download settings</span>
ROBOTSTXT_OBEY <span class="token operator">=</span> <span class="token boolean">True</span>
DOWNLOAD_DELAY <span class="token operator">=</span> <span class="token number">1</span>
CONCURRENT_REQUESTS <span class="token operator">=</span> <span class="token number">8</span>
USER_AGENTS_LIST <span class="token operator">=</span> <span class="token punctuation">[</span>
    <span class="token string">'Mozilla/5.0 ... Chrome/100.0 ...'</span><span class="token punctuation">,</span>
    <span class="token string">'Mozilla/5.0 ... Firefox/110.0 ...'</span><span class="token punctuation">,</span>
    <span class="token comment"># 可自行补充</span>
<span class="token punctuation">]</span>
DOWNLOADER_MIDDLEWARES <span class="token operator">=</span> <span class="token punctuation">{</span>
    <span class="token string">'news_spider.middlewares.RandomUserAgentMiddleware'</span><span class="token punctuation">:</span> <span class="token number">400</span><span class="token punctuation">,</span>
    <span class="token string">'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware'</span><span class="token punctuation">:</span> <span class="token boolean">None</span><span class="token punctuation">,</span>
<span class="token punctuation">}</span>
</code></pre> </li> 
   <li> <p><strong>自定义中间件:随机 User-Agent</strong> (<code>news_spider/middlewares.py</code>)</p> <pre><code class="prism language-python"><span class="token keyword">import</span> random

<span class="token keyword">class</span> <span class="token class-name">RandomUserAgentMiddleware</span><span class="token punctuation">:</span>
    <span class="token keyword">def</span> <span class="token function">__init__</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> user_agents<span class="token punctuation">)</span><span class="token punctuation">:</span>
        self<span class="token punctuation">.</span>user_agents <span class="token operator">=</span> user_agents

    <span class="token decorator annotation punctuation">@classmethod</span>
    <span class="token keyword">def</span> <span class="token function">from_crawler</span><span class="token punctuation">(</span>cls<span class="token punctuation">,</span> crawler<span class="token punctuation">)</span><span class="token punctuation">:</span>
        <span class="token keyword">return</span> cls<span class="token punctuation">(</span>
            user_agents<span class="token operator">=</span>crawler<span class="token punctuation">.</span>settings<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'USER_AGENTS_LIST'</span><span class="token punctuation">)</span>
        <span class="token punctuation">)</span>

    <span class="token keyword">def</span> <span class="token function">process_request</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> request<span class="token punctuation">,</span> spider<span class="token punctuation">)</span><span class="token punctuation">:</span>
        ua <span class="token operator">=</span> random<span class="token punctuation">.</span>choice<span class="token punctuation">(</span>self<span class="token punctuation">.</span>user_agents<span class="token punctuation">)</span>
        request<span class="token punctuation">.</span>headers<span class="token punctuation">.</span>setdefault<span class="token punctuation">(</span><span class="token string">'User-Agent'</span><span class="token punctuation">,</span> ua<span class="token punctuation">)</span>
</code></pre> </li> 
   <li> <p><strong>MySQL Pipeline</strong> (<code>news_spider/pipelines.py</code>)</p> <pre><code class="prism language-python"><span class="token keyword">import</span> pymysql
<span class="token keyword">from</span> pymysql<span class="token punctuation">.</span>err <span class="token keyword">import</span> IntegrityError

<span class="token keyword">class</span> <span class="token class-name">MySQLPipeline</span><span class="token punctuation">:</span>
    <span class="token keyword">def</span> <span class="token function">open_spider</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> spider<span class="token punctuation">)</span><span class="token punctuation">:</span>
        <span class="token comment"># 连接数据库</span>
        self<span class="token punctuation">.</span>conn <span class="token operator">=</span> pymysql<span class="token punctuation">.</span>connect<span class="token punctuation">(</span>
            host<span class="token operator">=</span>spider<span class="token punctuation">.</span>settings<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'MYSQL_HOST'</span><span class="token punctuation">)</span><span class="token punctuation">,</span>
            port<span class="token operator">=</span>spider<span class="token punctuation">.</span>settings<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'MYSQL_PORT'</span><span class="token punctuation">)</span><span class="token punctuation">,</span>
            user<span class="token operator">=</span>spider<span class="token punctuation">.</span>settings<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'MYSQL_USER'</span><span class="token punctuation">)</span><span class="token punctuation">,</span>
            password<span class="token operator">=</span>spider<span class="token punctuation">.</span>settings<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'MYSQL_PASSWORD'</span><span class="token punctuation">)</span><span class="token punctuation">,</span>
            db<span class="token operator">=</span>spider<span class="token punctuation">.</span>settings<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'MYSQL_DB'</span><span class="token punctuation">)</span><span class="token punctuation">,</span>
            charset<span class="token operator">=</span>spider<span class="token punctuation">.</span>settings<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'MYSQL_CHARSET'</span><span class="token punctuation">)</span><span class="token punctuation">,</span>
            cursorclass<span class="token operator">=</span>pymysql<span class="token punctuation">.</span>cursors<span class="token punctuation">.</span>DictCursor
        <span class="token punctuation">)</span>
        self<span class="token punctuation">.</span>cursor <span class="token operator">=</span> self<span class="token punctuation">.</span>conn<span class="token punctuation">.</span>cursor<span class="token punctuation">(</span><span class="token punctuation">)</span>
        <span class="token comment"># 创建表</span>
        create_table_sql <span class="token operator">=</span> <span class="token triple-quoted-string string">"""
        CREATE TABLE IF NOT EXISTS headline_news (
            id INT AUTO_INCREMENT PRIMARY KEY,
            title VARCHAR(255),
            summary TEXT,
            url VARCHAR(512) UNIQUE,
            pub_date DATETIME
        ) CHARACTER SET utf8mb4;
        """</span>
        self<span class="token punctuation">.</span>cursor<span class="token punctuation">.</span>execute<span class="token punctuation">(</span>create_table_sql<span class="token punctuation">)</span>
        self<span class="token punctuation">.</span>conn<span class="token punctuation">.</span>commit<span class="token punctuation">(</span><span class="token punctuation">)</span>

    <span class="token keyword">def</span> <span class="token function">close_spider</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> spider<span class="token punctuation">)</span><span class="token punctuation">:</span>
        self<span class="token punctuation">.</span>cursor<span class="token punctuation">.</span>close<span class="token punctuation">(</span><span class="token punctuation">)</span>
        self<span class="token punctuation">.</span>conn<span class="token punctuation">.</span>close<span class="token punctuation">(</span><span class="token punctuation">)</span>

    <span class="token keyword">def</span> <span class="token function">process_item</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> item<span class="token punctuation">,</span> spider<span class="token punctuation">)</span><span class="token punctuation">:</span>
        insert_sql <span class="token operator">=</span> <span class="token triple-quoted-string string">"""
        INSERT INTO headline_news (title, summary, url, pub_date)
        VALUES (%s, %s, %s, %s)
        """</span>
        <span class="token keyword">try</span><span class="token punctuation">:</span>
            self<span class="token punctuation">.</span>cursor<span class="token punctuation">.</span>execute<span class="token punctuation">(</span>insert_sql<span class="token punctuation">,</span> <span class="token punctuation">(</span>
                item<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'title'</span><span class="token punctuation">)</span><span class="token punctuation">,</span>
                item<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'summary'</span><span class="token punctuation">)</span><span class="token punctuation">,</span>
                item<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'url'</span><span class="token punctuation">)</span><span class="token punctuation">,</span>
                item<span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token string">'pub_date'</span><span class="token punctuation">)</span>
            <span class="token punctuation">)</span><span class="token punctuation">)</span>
            self<span class="token punctuation">.</span>conn<span class="token punctuation">.</span>commit<span class="token punctuation">(</span><span class="token punctuation">)</span>
        <span class="token keyword">except</span> IntegrityError<span class="token punctuation">:</span>
            <span class="token comment"># URL 已存在则跳过</span>
            <span class="token keyword">pass</span>
        <span class="token keyword">return</span> item
</code></pre> </li> 
   <li> <p><strong>编写 Spider</strong> (<code>news_spider/spiders/news.py</code>)</p> <pre><code class="prism language-python"><span class="token keyword">import</span> scrapy
<span class="token keyword">from</span> news_spider<span class="token punctuation">.</span>items <span class="token keyword">import</span> NewsSpiderItem

<span class="token keyword">class</span> <span class="token class-name">NewsSpider</span><span class="token punctuation">(</span>scrapy<span class="token punctuation">.</span>Spider<span class="token punctuation">)</span><span class="token punctuation">:</span>
    name <span class="token operator">=</span> <span class="token string">'news'</span>
    allowed_domains <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token string">'news.example.com'</span><span class="token punctuation">]</span>
    start_urls <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token string">'https://news.example.com/'</span><span class="token punctuation">]</span>

    <span class="token keyword">def</span> <span class="token function">parse</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> response<span class="token punctuation">)</span><span class="token punctuation">:</span>
        <span class="token comment"># 假设首页头条新闻在 <div class="headline-list"> 下,每个新闻项 <div class="item"></span>
        <span class="token keyword">for</span> news <span class="token keyword">in</span> response<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'div.headline-list div.item'</span><span class="token punctuation">)</span><span class="token punctuation">:</span>
            item <span class="token operator">=</span> NewsSpiderItem<span class="token punctuation">(</span><span class="token punctuation">)</span>
            item<span class="token punctuation">[</span><span class="token string">'title'</span><span class="token punctuation">]</span> <span class="token operator">=</span> news<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'h2.title::text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get<span class="token punctuation">(</span>default<span class="token operator">=</span><span class="token string">''</span><span class="token punctuation">)</span><span class="token punctuation">.</span>strip<span class="token punctuation">(</span><span class="token punctuation">)</span>  <span class="token comment"># default='' 防止选择器取不到时对 None 调用 strip() 报错</span>
            item<span class="token punctuation">[</span><span class="token string">'summary'</span><span class="token punctuation">]</span> <span class="token operator">=</span> news<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'p.summary::text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get<span class="token punctuation">(</span>default<span class="token operator">=</span><span class="token string">''</span><span class="token punctuation">)</span><span class="token punctuation">.</span>strip<span class="token punctuation">(</span><span class="token punctuation">)</span>
            item<span class="token punctuation">[</span><span class="token string">'url'</span><span class="token punctuation">]</span> <span class="token operator">=</span> response<span class="token punctuation">.</span>urljoin<span class="token punctuation">(</span>news<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'a::attr(href)'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">)</span>
            item<span class="token punctuation">[</span><span class="token string">'pub_date'</span><span class="token punctuation">]</span> <span class="token operator">=</span> news<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'span.pub-date::text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get<span class="token punctuation">(</span>default<span class="token operator">=</span><span class="token string">''</span><span class="token punctuation">)</span><span class="token punctuation">.</span>strip<span class="token punctuation">(</span><span class="token punctuation">)</span>  <span class="token comment"># 需后续转换为标准时间</span>
            <span class="token keyword">yield</span> scrapy<span class="token punctuation">.</span>Request<span class="token punctuation">(</span>
                url<span class="token operator">=</span>item<span class="token punctuation">[</span><span class="token string">'url'</span><span class="token punctuation">]</span><span class="token punctuation">,</span>
                callback<span class="token operator">=</span>self<span class="token punctuation">.</span>parse_detail<span class="token punctuation">,</span>
                meta<span class="token operator">=</span><span class="token punctuation">{</span><span class="token string">'item'</span><span class="token punctuation">:</span> item<span class="token punctuation">}</span>
            <span class="token punctuation">)</span>

        <span class="token comment"># Assumed pagination markup: the next-page link lives in &lt;a class="next-page" href="..."&gt;</span>
        next_page <span class="token operator">=</span> response<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'a.next-page::attr(href)'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get<span class="token punctuation">(</span><span class="token punctuation">)</span>
        <span class="token keyword">if</span> next_page<span class="token punctuation">:</span>
            <span class="token keyword">yield</span> response<span class="token punctuation">.</span>follow<span class="token punctuation">(</span>next_page<span class="token punctuation">,</span> callback<span class="token operator">=</span>self<span class="token punctuation">.</span>parse<span class="token punctuation">)</span>

    <span class="token keyword">def</span> <span class="token function">parse_detail</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> response<span class="token punctuation">)</span><span class="token punctuation">:</span>
        item <span class="token operator">=</span> response<span class="token punctuation">.</span>meta<span class="token punctuation">[</span><span class="token string">'item'</span><span class="token punctuation">]</span>
        <span class="token comment"># The detail page usually carries a more precise publish time</span>
        pub_date <span class="token operator">=</span> response<span class="token punctuation">.</span>css<span class="token punctuation">(</span><span class="token string">'div.meta span.date::text'</span><span class="token punctuation">)</span><span class="token punctuation">.</span>get<span class="token punctuation">(</span>default<span class="token operator">=</span><span class="token string">''</span><span class="token punctuation">)</span><span class="token punctuation">.</span>strip<span class="token punctuation">(</span><span class="token punctuation">)</span>
        item<span class="token punctuation">[</span><span class="token string">'pub_date'</span><span class="token punctuation">]</span> <span class="token operator">=</span> self<span class="token punctuation">.</span>parse_date<span class="token punctuation">(</span>pub_date<span class="token punctuation">)</span>
        <span class="token keyword">yield</span> item

    <span class="token keyword">def</span> <span class="token function">parse_date</span><span class="token punctuation">(</span>self<span class="token punctuation">,</span> date_str<span class="token punctuation">)</span><span class="token punctuation">:</span>
        <span class="token comment"># Assumes date_str looks like '2025-05-30 14:30:00'</span>
        <span class="token keyword">from</span> datetime <span class="token keyword">import</span> datetime
        <span class="token keyword">try</span><span class="token punctuation">:</span>
            dt <span class="token operator">=</span> datetime<span class="token punctuation">.</span>strptime<span class="token punctuation">(</span>date_str<span class="token punctuation">,</span> <span class="token string">'%Y-%m-%d %H:%M:%S'</span><span class="token punctuation">)</span>
            <span class="token keyword">return</span> dt
        <span class="token keyword">except</span> <span class="token punctuation">(</span>ValueError<span class="token punctuation">,</span> TypeError<span class="token punctuation">)</span><span class="token punctuation">:</span>  <span class="token comment"># a bare except would also swallow unrelated errors</span>
            <span class="token keyword">return</span> <span class="token boolean">None</span>
</code></pre> </li> 
   <li> <p><strong>Run the spider</strong></p> 
    <ul> 
     <li> <p>Make sure the <code>news_db</code> database exists in MySQL and the username and password are correct;</p> </li> 
     <li> <p>From the project root, run:</p> <pre><code class="prism language-bash">scrapy crawl news
</code></pre> </li> 
     <li> <p>The log shows crawl progress while the spider runs; once it finishes, inspect the results in the <code>headline_news</code> table:</p> <pre><code class="prism language-sql"><span class="token keyword">SELECT</span> <span class="token operator">*</span> <span class="token keyword">FROM</span> headline_news <span class="token keyword">LIMIT</span> <span class="token number">10</span><span class="token punctuation">;</span>
</code></pre> </li> 
    </ul> </li> 
  </ol> 
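<p>The fixed-format <code>parse_date</code> above breaks as soon as the site mixes date styles. A small, hedged sketch (the candidate format list is an assumption; extend it for your own target site) that tries several formats and falls back to <code>None</code>:</p>

```python
from datetime import datetime

# Candidate formats assumed for illustration; add whatever the site actually uses
FORMATS = ['%Y-%m-%d %H:%M:%S', '%Y-%m-%d', '%Y/%m/%d %H:%M']

def parse_date(date_str):
    """Try each known format in turn; return None when nothing matches."""
    if not date_str:
        return None
    date_str = date_str.strip()
    for fmt in FORMATS:
        try:
            return datetime.strptime(date_str, fmt)
        except ValueError:
            continue
    return None
```

<p>Returning <code>None</code> instead of raising lets the pipeline decide how to treat records with unparseable dates.</p>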
  <h4>12.3 Code Walkthrough &amp; FAQ</h4> 
  <ul> 
   <li> <p><strong>Q: Why fire a new Request at the detail page from <code>parse</code>?</strong></p> 
    <ul> 
     <li>The listing page only shows part of the data; some fields (exact publish time, author, article body) exist only on the detail page. The <code>meta</code> parameter carries already-scraped fields into the next callback (recent Scrapy versions also offer <code>cb_kwargs</code> for this).</li> 
    </ul> </li> 
   <li> <p><strong>Q: How do I turn the string <code>'2025-05-30 14:30:00'</code> into a <code>datetime</code>?</strong></p> 
    <ul> 
     <li>Use the standard library's <code>datetime.strptime</code> with the matching format string; if the format varies, <code>strip()</code> the text first or pull the date out with a regex.</li> 
    </ul> </li> 
   <li> <p><strong>Q: What if the target site requires login or shows a captcha?</strong></p> 
    <ul> 
     <li>Simulate the login in <code>start_requests</code> (with <code>requests</code> + <code>cookies</code>, or Selenium), grab the resulting cookies, and attach them to the subsequent Scrapy requests.</li> 
    </ul> </li> 
   <li> <p><strong>Q: How do I handle huge pagination (thousands of pages)?</strong></p> 
    <ul> 
     <li>Work out the URL pattern (e.g. <code>page=1,2,3...</code>) and generate requests with <code>for page in range(1, 1001): yield scrapy.Request(...)</code>. Throttle the request rate and rotate IPs to avoid bans.</li> 
    </ul> </li> 
   <li> <p><strong>Q: Why randomize the User-Agent?</strong></p> 
    <ul> 
     <li>So the site is less likely to flag the requests as coming from a bot.</li> 
    </ul> </li> 
   <li> <p><strong>Q: How do I use proxies in Scrapy?</strong></p> 
    <ul> 
     <li>See section 11.4: register your own <code>RandomProxyMiddleware</code> in <code>DOWNLOADER_MIDDLEWARES</code>, or use a ready-made library such as Scrapy-Proxy-Pool.</li> 
    </ul> </li> 
  </ul> 
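<p>For the pagination question above, the URL-building step can be factored into a tiny helper and tested before a single request is sent. The <code>page=</code> query parameter and the base URL are assumptions for illustration; in a real crawl, pair this with <code>DOWNLOAD_DELAY</code> or AutoThrottle and proxy rotation:</p>

```python
def page_urls(base_url, last_page):
    """Listing-page URLs, assuming '?page=N' pagination with N starting at 1."""
    return [f"{base_url}?page={n}" for n in range(1, last_page + 1)]

urls = page_urls("https://news.example.com/list", 1000)

# Inside a Spider this would drive start_requests (sketch):
#     def start_requests(self):
#         for url in page_urls("https://news.example.com/list", 1000):
#             yield scrapy.Request(url, callback=self.parse)
```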
  <hr> 
  <p></p> 
  <h3>13. Common Third-Party Libraries for Python Crawlers at a Glance (as of June 2025)</h3> 
  <p>The tables below group the commonly used libraries by category, with a short description and typical use cases for each.</p> 
  <h4>13.1 Basic Requests &amp; Parsing</h4> 
  <table> 
   <thead> 
    <tr> 
     <th>Library</th> 
     <th>What it does</th> 
     <th>Typical use</th> 
    </tr> 
   </thead> 
   <tbody> 
    <tr> 
     <td><strong>requests</strong></td> 
     <td>Synchronous HTTP requests; clean API, mature ecosystem</td> 
     <td>The vast majority of simple crawlers; form submission, cookie support</td> 
    </tr> 
    <tr> 
     <td><strong>httpx</strong></td> 
     <td>HTTP client with both sync &amp; async support, requests-compatible API</td> 
     <td>First choice when you need async or more advanced features</td> 
    </tr> 
    <tr> 
     <td><strong>aiohttp</strong></td> 
     <td>HTTP client built on native asyncio coroutines</td> 
     <td>High-concurrency fetching, async crawlers</td> 
    </tr> 
    <tr> 
     <td><strong>urllib3</strong></td> 
     <td>Low-level HTTP client; the layer requests is built on</td> 
     <td>When you need lower-level control or custom connection-pool management</td> 
    </tr> 
    <tr> 
     <td><strong>BeautifulSoup (bs4)</strong></td> 
     <td>HTML/XML parsing; easy to learn, flexible</td> 
     <td>Quick start for beginners; parsing messy HTML</td> 
    </tr> 
    <tr> 
     <td><strong>lxml</strong></td> 
     <td>High-performance parser built on libxml2/libxslt, with XPath support</td> 
     <td>Performance-critical, large-scale parsing, extracting with XPath</td> 
    </tr> 
    <tr> 
     <td><strong>parsel</strong></td> 
     <td>The selector library bundled with Scrapy; CSS and XPath</td> 
     <td>Quick extraction inside Scrapy projects, or standalone use outside them</td> 
    </tr> 
    <tr> 
     <td><strong>PyQuery</strong></td> 
     <td>jQuery-like parsing API built on lxml</td> 
     <td>Front-end developers already comfortable with CSS selectors</td> 
    </tr> 
    <tr> 
     <td><strong>re (regex)</strong></td> 
     <td>Python's built-in regex module for matching simple text patterns</td> 
     <td>Extracting emails, phone numbers, URLs, numbers, and other flat patterns</td> 
    </tr> 
    <tr> 
     <td><strong>html5lib</strong></td> 
     <td>The most forgiving parser (tolerates broken HTML); relatively slow</td> 
     <td>Parsing severely malformed HTML</td> 
    </tr> 
   </tbody> 
  </table> 
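<p>As the <code>re</code> row suggests, a regular expression is the right tool only for flat text patterns, never a substitute for an HTML parser. A minimal stdlib sketch (the sample text is made up):</p>

```python
import re

text = "Contact: alice@example.com, bob@example.org; see https://example.com/about"

# Simple patterns: fine for plain text, not for nested HTML
emails = re.findall(r'[\w.+-]+@[\w-]+\.[\w.-]+', text)
urls = re.findall(r'https?://[^\s,;]+', text)
```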
  <h4>13.2 Browser Automation</h4> 
  <table> 
   <thead> 
    <tr> 
     <th>Library</th> 
     <th>What it does</th> 
     <th>Typical use</th> 
    </tr> 
   </thead> 
   <tbody> 
    <tr> 
     <td><strong>Selenium</strong></td> 
     <td>The most mature browser-automation framework; supports Chrome, Firefox, Edge, and more</td> 
     <td>Simulating user actions (click, scroll, form submit); scraping JS-rendered content</td> 
    </tr> 
    <tr> 
     <td><strong>Playwright</strong></td> 
     <td>From Microsoft, a successor to Puppeteer; concise API, multi-browser</td> 
     <td>High-performance headless mode; both sync and async APIs</td> 
    </tr> 
    <tr> 
     <td><strong>Pyppeteer</strong></td> 
     <td>Python port of Puppeteer</td> 
     <td>Quick transition for Node.js users moving to Python</td> 
    </tr> 
    <tr> 
     <td><strong>undetected-chromedriver</strong></td> 
     <td>Hides Selenium's automation fingerprints from anti-bot systems</td> 
     <td>Stronger evasion against advanced bot detection</td> 
    </tr> 
    <tr> 
     <td><strong>Splash</strong></td> 
     <td>QtWebKit-based rendering service, used via Scrapy-Splash</td> 
     <td>Combining Scrapy with dynamic rendering; batch asynchronous rendering</td> 
    </tr> 
   </tbody> 
  </table> 
  <h4>13.3 Asynchronous Crawling</h4> 
  <table> 
   <thead> 
    <tr> 
     <th>Library</th> 
     <th>What it does</th> 
     <th>Typical use</th> 
    </tr> 
   </thead> 
   <tbody> 
    <tr> 
     <td><strong>asyncio</strong></td> 
     <td>Standard-library event loop and coroutine primitives</td> 
     <td>The backbone of any async crawler</td> 
    </tr> 
    <tr> 
     <td><strong>aiohttp</strong></td> 
     <td>HTTP client built on asyncio</td> 
     <td>High-concurrency fetching; pair with BeautifulSoup/lxml for parsing</td> 
    </tr> 
    <tr> 
     <td><strong>httpx</strong></td> 
     <td>Sync &amp; async support, requests-compatible interface</td> 
     <td>Seamless migration from requests to async mode</td> 
    </tr> 
    <tr> 
     <td><strong>trio</strong></td> 
     <td>Alternative async framework built around structured concurrency, with a smaller ecosystem</td> 
     <td>Digging into async internals or experimenting with a different model</td> 
    </tr> 
    <tr> 
     <td><strong>curio</strong></td> 
     <td>Pure-Python async library emphasizing simplicity</td> 
     <td>Studying async I/O fundamentals</td> 
    </tr> 
    <tr> 
     <td><strong>aiofiles</strong></td> 
     <td>Asynchronous file operations</td> 
     <td>Reading and writing many files from inside async code</td> 
    </tr> 
   </tbody> 
  </table> 
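<p>The pattern shared by all of these clients is a semaphore-bounded <code>gather</code>. The sketch below uses <code>asyncio.sleep</code> as a stand-in for the real HTTP call (which would be an aiohttp or httpx request), so the concurrency structure runs even offline:</p>

```python
import asyncio

async def fetch(url, sem):
    """Stub fetch: replace the sleep with a real aiohttp/httpx request."""
    async with sem:                 # at most `concurrency` requests in flight
        await asyncio.sleep(0.01)   # simulated network latency
        return f"body of {url}"

async def crawl(urls, concurrency=5):
    sem = asyncio.Semaphore(concurrency)
    return await asyncio.gather(*(fetch(u, sem) for u in urls))

results = asyncio.run(crawl([f"https://example.com/p{i}" for i in range(20)]))
```

<p>Bounding concurrency with a semaphore is what keeps an async crawler polite; an unbounded <code>gather</code> over thousands of URLs hits the site all at once.</p>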
  <h4>13.4 Login Simulation &amp; Captcha Handling</h4> 
  <table> 
   <thead> 
    <tr> 
     <th>Library</th> 
     <th>What it does</th> 
     <th>Typical use</th> 
    </tr> 
   </thead> 
   <tbody> 
    <tr> 
     <td><strong>requests</strong> + <strong>Session</strong></td> 
     <td>Simulated login with automatic cookie management</td> 
     <td>Most login-then-scrape scenarios</td> 
    </tr> 
    <tr> 
     <td><strong>selenium</strong></td> 
     <td>Browser-automated login; runs JS, handles complex login flows</td> 
     <td>Logins involving JS encryption or dynamic tokens</td> 
    </tr> 
    <tr> 
     <td><strong>Playwright</strong></td> 
     <td>Similar to Selenium but faster, with a more modern API</td> 
     <td>Lighter-weight browser automation</td> 
    </tr> 
    <tr> 
     <td><strong>pytesseract</strong></td> 
     <td>OCR for recognizing text in images</td> 
     <td>Simple captcha recognition</td> 
    </tr> 
    <tr> 
     <td><strong>captcha_solver</strong></td> 
     <td>SDKs for third-party captcha-solving platforms</td> 
     <td>Calling a paid captcha-solving platform</td> 
    </tr> 
    <tr> 
     <td><strong>twoCaptcha</strong></td> 
     <td>Python client for the paid 2Captcha solving platform</td> 
     <td>When you need a dependable commercial captcha-solving service</td> 
    </tr> 
   </tbody> 
  </table> 
  <h4>13.5 Anti-Bot Measures &amp; Proxies</h4> 
  <table> 
   <thead> 
    <tr> 
     <th>Library</th> 
     <th>What it does</th> 
     <th>Typical use</th> 
    </tr> 
   </thead> 
   <tbody> 
    <tr> 
     <td><strong>fake-useragent</strong></td> 
     <td>Generates random User-Agent strings</td> 
     <td>Avoiding trivial bot detection</td> 
    </tr> 
    <tr> 
     <td><strong>scrapy-fake-useragent</strong></td> 
     <td>Random-UA plugin built for Scrapy</td> 
     <td>One-line random UA in Scrapy projects</td> 
    </tr> 
    <tr> 
     <td><strong>requests-random-user-agent</strong></td> 
     <td>Random UA support for requests</td> 
     <td>Easy request-header control with requests</td> 
    </tr> 
    <tr> 
     <td><strong>scrapy-rotating-proxies</strong></td> 
     <td>Scrapy middleware that rotates automatically through a proxy pool (paid or free)</td> 
     <td>Avoiding single-IP bans in large Scrapy crawls</td> 
    </tr> 
    <tr> 
     <td><strong>scrapy-proxies</strong></td> 
     <td>Open-source Scrapy proxy middleware; works with free proxy pools</td> 
     <td>Quick proxy setup in entry-level Scrapy projects</td> 
    </tr> 
    <tr> 
     <td><strong>proxylist2</strong></td> 
     <td>Python package that scrapes proxy IPs from multiple free proxy sites</td> 
     <td>Automatically maintaining a free proxy list</td> 
    </tr> 
    <tr> 
     <td><strong>requests-redis-rotating-proxies</strong></td> 
     <td>Redis-backed proxy list for a highly available proxy pool</td> 
     <td>Centralized proxy management in mid-to-large projects</td> 
    </tr> 
    <tr> 
     <td><strong>scrapy-user-agents</strong></td> 
     <td>Scrapy plugin with a built-in list of common UAs</td> 
     <td>Simplified UA-list management in Scrapy</td> 
    </tr> 
    <tr> 
     <td><strong>cfscrape</strong></td> 
     <td>Bypasses Cloudflare's simple JS challenge</td> 
     <td>Sites guarded by Cloudflare's 5-second check page</td> 
    </tr> 
   </tbody> 
  </table> 
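<p>The simplest version of UA rotation needs no extra dependency at all: keep a list and pick one at random per request. The entries below are illustrative examples (UA strings go stale); in production, a library such as fake-useragent keeps the list current for you:</p>

```python
import random

# Example UA strings only; refresh these periodically in real use
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/124.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 Chrome/124.0 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:126.0) Gecko/20100101 Firefox/126.0",
]

def random_headers():
    """Headers dict to pass as requests.get(url, headers=random_headers())."""
    return {"User-Agent": random.choice(USER_AGENTS)}
```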
  <h4>13.6 Distributed Scheduling</h4> 
  <table> 
   <thead> 
    <tr> 
     <th>Library</th> 
     <th>What it does</th> 
     <th>Typical use</th> 
    </tr> 
   </thead> 
   <tbody> 
    <tr> 
     <td><strong>scrapy-redis</strong></td> 
     <td>Distributed-crawling extension for Scrapy; Redis serves as the shared queue and dedup store</td> 
     <td>Distributed Scrapy projects</td> 
    </tr> 
    <tr> 
     <td><strong>scrapy-cluster</strong></td> 
     <td>Kafka + Redis based distributed Scrapy crawling system</td> 
     <td>Enterprise deployments that coordinate with a message queue</td> 
    </tr> 
    <tr> 
     <td><strong>Frigate</strong></td> 
     <td>High-performance distributed crawler combining Redis + MongoDB</td> 
     <td>Large-scale distributed crawling that needs NoSQL storage integration</td> 
    </tr> 
    <tr> 
     <td><strong>PhantomJS + Splash</strong></td> 
     <td>Headless rendering services that pair with Scrapy for distributed rendering (note: PhantomJS itself is no longer maintained)</td> 
     <td>Rendering large volumes of JS pages before scraping</td> 
    </tr> 
   </tbody> 
  </table> 
  <h4>13.7 Other Useful Tools</h4> 
  <table> 
   <thead> 
    <tr> 
     <th>Library</th> 
     <th>What it does</th> 
     <th>Typical use</th> 
    </tr> 
   </thead> 
   <tbody> 
    <tr> 
     <td><strong>robotparser</strong></td> 
     <td>Python's built-in <code>urllib.robotparser</code> for parsing robots.txt</td> 
     <td>Checking robots.txt before you crawl</td> 
    </tr> 
    <tr> 
     <td><strong>tldextract</strong></td> 
     <td>Extracts the domain, subdomain, and suffix from a URL</td> 
     <td>Grouping or counting URLs by domain</td> 
    </tr> 
    <tr> 
     <td><strong>url-normalize</strong></td> 
     <td>URL normalization; removes duplicate query parameters</td> 
     <td>Canonicalizing URLs for deduplication during a crawl</td> 
    </tr> 
    <tr> 
     <td><strong>logging</strong></td> 
     <td>Standard-library logging</td> 
     <td>Every crawler project should keep logs</td> 
    </tr> 
    <tr> 
     <td><strong>fake_useragent</strong></td> 
     <td>Fetches fresh UA strings dynamically (same package as fake-useragent in 13.5)</td> 
     <td>Avoiding a stale UA list</td> 
    </tr> 
    <tr> 
     <td><strong>termcolor</strong></td> 
     <td>Colored terminal output for more readable debugging</td> 
     <td>Colorized crawler logs and debug output</td> 
    </tr> 
    <tr> 
     <td><strong>psutil</strong></td> 
     <td>System-resource monitoring: CPU, memory usage, and more</td> 
     <td>Watching resource usage in long-running crawlers</td> 
    </tr> 
    <tr> 
     <td><strong>schedule</strong></td> 
     <td>Job-scheduling library for running scripts on a timer</td> 
     <td>Running crawl jobs on a schedule</td> 
    </tr> 
    <tr> 
     <td><strong>watchdog</strong></td> 
     <td>File-system monitoring with callbacks on file/directory changes</td> 
     <td>Watching crawl output files in real time and triggering follow-up tasks</td> 
    </tr> 
   </tbody> 
  </table> 
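<p>The <code>robotparser</code> row deserves a concrete example. Normally you would point it at a live file with <code>set_url()</code> and <code>read()</code>; here <code>parse()</code> is fed made-up rules directly so the sketch runs offline:</p>

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Feed rules directly; in real use: rp.set_url("https://example.com/robots.txt"); rp.read()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
    "Allow: /",
])

ok = rp.can_fetch("my-crawler", "https://example.com/news/1.html")
blocked = rp.can_fetch("my-crawler", "https://example.com/private/x.html")
```

<p>Checking <code>can_fetch</code> before queueing a URL is a cheap way to stay on the right side of a site's stated policy.</p>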
  <blockquote> 
   <p><strong>Note</strong>: for reasons of space, the tables above cover only libraries that were in common use or reasonably stable as of June 2025. New libraries appear and existing ones evolve, so check the official documentation and community resources for whatever your project actually needs.</p> 
  </blockquote> 
  <hr> 
  <p></p> 
  <h3>14. Appendix</h3> 
  <h4>14.1 Common Errors and How to Fix Them</h4> 
  <ol> 
   <li> <p><strong><code>ModuleNotFoundError: No module named 'xxx'</code></strong></p> 
    <ul> 
     <li>Cause: the package is not installed, or was installed globally instead of into the virtual environment.</li> 
     <li>Fix: confirm the virtual environment is activated, then run <code>pip install xxx</code>.</li> 
    </ul> </li> 
   <li> <p><strong><code>requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED]</code></strong></p> 
    <ul> 
     <li> <p>Cause: the local CA certificate store is broken, so HTTPS cannot be verified.</p> </li> 
     <li> <p>Fix:</p> 
      <ul> 
       <li>Upgrade <code>certifi</code>: <code>pip install --upgrade certifi</code>;</li> 
       <li>Temporary workaround: <code>requests.get(url, verify=False)</code> (not recommended for production).</li> 
      </ul> </li> 
    </ul> </li> 
   <li> <p><strong><code>ValueError: too many values to unpack (expected 2)</code> when an XPath returns multiple values</strong></p> 
    <ul> 
     <li>Cause: code like <code>for x, y in tree.xpath(...)</code> where the XPath yields a different number of values than expected.</li> 
     <li>Fix: check the XPath expression, or pair two separate lists with <code>zip()</code>.</li> 
    </ul> </li> 
   <li> <p><strong><code>selenium.common.exceptions.WebDriverException: Message: 'chromedriver' executable needs to be in PATH</code></strong></p> 
    <ul> 
     <li>Cause: <code>chromedriver</code> is not on the system PATH, or the configured path is wrong.</li> 
     <li>Fix: download a <code>chromedriver</code> matching your Chrome version and add it to PATH, or pass its location explicitly in code (in Selenium 4+ this is done through a <code>Service</code> object rather than <code>executable_path</code>).</li> 
    </ul> </li> 
   <li> <p><strong><code>pymysql.err.OperationalError: (1045, "Access denied for user 'root'@'localhost' (using password: YES)")</code></strong></p> 
    <ul> 
     <li>Cause: wrong MySQL username, password, or privileges, or the MySQL service is not running.</li> 
     <li>Fix: verify the credentials, that the MySQL service is up, and that the database actually exists.</li> 
    </ul> </li> 
   <li> <p><strong><code>TimeoutError</code> or <code>asyncio.exceptions.TimeoutError</code></strong></p> 
    <ul> 
     <li>Cause: a slow network, or throttling by the target site.</li> 
     <li>Fix: raise the <code>timeout</code> value, lower the concurrency, and add proxies where appropriate.</li> 
    </ul> </li> 
   <li> <p><strong>UnicodeEncodeError/UnicodeDecodeError</strong></p> 
    <ul> 
     <li>Cause: the text's encoding does not match Python's default.</li> 
     <li>Fix: set <code>response.encoding = 'utf-8'</code> explicitly, or pass <code>encoding='utf-8'</code> when reading and writing files.</li> 
    </ul> </li> 
  </ol> 
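<p>For the Unicode errors in item 7, the root cause is always bytes decoded with the wrong codec. A small stdlib sketch of the failure and the two usual fixes (declare the correct encoding, or decode leniently):</p>

```python
raw = "价格: ¥99".encode("utf-8")   # the bytes as they arrive over HTTP

text = raw.decode("utf-8")          # correct fix: declare the real encoding

# Decoding with the wrong codec raises UnicodeDecodeError...
try:
    raw.decode("ascii")
    strict_failed = False
except UnicodeDecodeError:
    strict_failed = True

# ...while errors="replace" degrades gracefully instead of crashing
lossy = raw.decode("ascii", errors="replace")
```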
  <h4>14.2 Quick Reference: Common HTTP Status Codes</h4> 
  <table> 
   <thead> 
    <tr> 
     <th>Code</th> 
     <th>Meaning</th> 
    </tr> 
   </thead> 
   <tbody> 
    <tr> 
     <td>200</td> 
     <td>OK, the request succeeded</td> 
    </tr> 
    <tr> 
     <td>301</td> 
     <td>Moved Permanently (permanent redirect)</td> 
    </tr> 
    <tr> 
     <td>302</td> 
     <td>Found (temporary redirect)</td> 
    </tr> 
    <tr> 
     <td>400</td> 
     <td>Bad Request, malformed request syntax</td> 
    </tr> 
    <tr> 
     <td>401</td> 
     <td>Unauthorized, authentication required</td> 
    </tr> 
    <tr> 
     <td>403</td> 
     <td>Forbidden, the server refuses access (a common anti-bot block)</td> 
    </tr> 
    <tr> 
     <td>404</td> 
     <td>Not Found, the resource does not exist</td> 
    </tr> 
    <tr> 
     <td>405</td> 
     <td>Method Not Allowed, the request method is forbidden</td> 
    </tr> 
    <tr> 
     <td>408</td> 
     <td>Request Timeout, the server gave up waiting for the client's request</td> 
    </tr> 
    <tr> 
     <td>429</td> 
     <td>Too Many Requests, the client is sending requests too fast</td> 
    </tr> 
    <tr> 
     <td>500</td> 
     <td>Internal Server Error</td> 
    </tr> 
    <tr> 
     <td>502</td> 
     <td>Bad Gateway, an invalid response from the upstream server while acting as a gateway or proxy</td> 
    </tr> 
    <tr> 
     <td>503</td> 
     <td>Service Unavailable, the server temporarily cannot handle the request, often due to overload or rate limiting</td> 
    </tr> 
   </tbody> 
  </table> 
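<p>Codes 429 and 503 usually mean "slow down". A common response is exponential backoff; the delay schedule is pure and easy to test, while the commented loop sketches how it would wrap a real <code>requests</code> call (honoring <code>Retry-After</code> when the server sends one):</p>

```python
def backoff_delays(retries=5, base=1.0, cap=60.0):
    """Delay in seconds before each retry: base, 2*base, 4*base, ... capped."""
    return [min(cap, base * (2 ** i)) for i in range(retries)]

delays = backoff_delays()

# Real usage (sketch, assuming requests is imported):
#     for delay in backoff_delays():
#         resp = requests.get(url, headers=headers, timeout=10)
#         if resp.status_code not in (429, 503):
#             break
#         time.sleep(float(resp.headers.get("Retry-After", delay)))
```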
  <h4>14.3 Learning Resources and Next Steps</h4> 
  <ol> 
   <li> <p><strong>Official documentation</strong></p> 
    <ul> 
     <li>Requests: https://docs.python-requests.org/</li> 
     <li>BeautifulSoup: http://beautifulsoup.readthedocs.io/</li> 
     <li>Scrapy: https://docs.scrapy.org/</li> 
     <li>Selenium: https://www.selenium.dev/documentation/</li> 
     <li>Playwright: https://playwright.dev/python/</li> 
     <li>aiohttp: https://docs.aiohttp.org/</li> 
     <li>httpx: https://www.python-httpx.org/</li> 
    </ul> </li> 
   <li> <p><strong>Recommended books</strong></p> 
    <ul> 
     <li><em>Web Scraping with Python</em>, 2nd ed. (《Python网络数据采集(第二版)》) by Ryan Mitchell</li> 
     <li>《深入Python爬虫框架 Scrapy》 by 黄今</li> 
     <li>《Python3网络爬虫开发实战》 by 崔庆才</li> 
    </ul> </li> 
   <li> <p><strong>Courses and videos</strong></p> 
    <ul> 
     <li>Bilibili and YouTube both host solid Python crawler video series (search for "Python 爬虫 零基础" or "Scrapy 教程").</li> 
     <li>Advanced crawler courses on Coursera and imooc (慕课网).</li> 
    </ul> </li> 
   <li> <p><strong>Community resources</strong></p> 
    <ul> 
     <li>Stack Overflow: https://stackoverflow.com/ (search any error message you hit)</li> 
     <li>SegmentFault: https://segmentfault.com/ (Chinese developer community)</li> 
     <li>GitHub Trending: browse open-source crawler projects and learn from their practices.</li> 
    </ul> </li> 
  </ol> 
  <hr> 
  <p></p> 
  <h3>15. Summary</h3> 
  <p>This tutorial worked up from the basics of <code>requests + BeautifulSoup</code> to the Scrapy framework, browser automation, async crawling, and distributed crawling, covering the core techniques of Python web scraping along with the mainstream libraries and tools as of June 2025. For a beginner, getting productive quickly comes down to these points:</p> 
  <ol> 
   <li><strong>Understand HTTP basics</strong>: build GET/POST requests and analyze responses;</li> 
   <li><strong>Master HTML parsing</strong>: get comfortable with BeautifulSoup and lxml (XPath/CSS selectors);</li> 
   <li><strong>Try Scrapy</strong>: set up a project, write Spiders, Pipelines, and Settings, and debug with Scrapy Shell;</li> 
   <li><strong>Handle dynamic pages</strong>: use Selenium or Playwright to fetch JS-rendered content, then extract it with the usual parsing tools;</li> 
   <li><strong>Explore async crawling</strong>: understand coroutines and raise concurrency with aiohttp and httpx;</li> 
   <li><strong>Store and deduplicate</strong>: know CSV/JSON/SQLite/MySQL/MongoDB, and dedupe URLs properly (sets, Redis, Bloom filters);</li> 
   <li><strong>Counter anti-bot measures</strong>: set User-Agent, Referer, download delays, and proxy IP pools, and know the options for captchas;</li> 
   <li><strong>Go distributed</strong>: learn Scrapy-Redis and spread the workload across machines for higher throughput.</li> 
  </ol> 
  <p>Finally, crawling technology moves fast: the libraries that are mainstream as of this writing (June 2025) will keep changing as techniques iterate and site defenses escalate. Once you are past the basics, keep an eye on the Python communities, GitHub Trending, and the official documentation so you can pick up new features, new libraries, and new ideas, and keep refining your own crawling stack. Good luck on the road of data collection, and have fun with Python crawlers!</p> 
  <hr> 
  <p><em>Written on June 1, 2025</em></p> 
 </div> 
</div>
                            </div>
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
    <div class="container">
        <div id="paradigm-article-related">
            <div class="recommend-post mb30">
                <ul class="widget-links">
                    <li><a href="/article/1950180118999658496.htm"
                           title="Python数据可视化:用代码绘制数据背后的故事" target="_blank">Python数据可视化:用代码绘制数据背后的故事</a>
                        <span class="text-muted">AAEllisonPang</span>
<a class="tag" taget="_blank" href="/search/Python/1.htm">Python</a><a class="tag" taget="_blank" href="/search/%E4%BF%A1%E6%81%AF%E5%8F%AF%E8%A7%86%E5%8C%96/1.htm">信息可视化</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a>
                        <div>引言:当数据会说话在数据爆炸的时代,可视化是解锁数据价值的金钥匙。Python凭借其丰富的可视化生态库,已成为数据科学家的首选工具。本文将带您从基础到高级,探索如何用Python将冰冷数字转化为引人入胜的视觉叙事。一、基础篇:二维可视化的艺术表达1.1Matplotlib:可视化领域的瑞士军刀importmatplotlib.pyplotaspltimportnumpyasnpx=np.linsp</div>
                    </li>
                    <li><a href="/article/1950179614320029696.htm"
                           title="python学习笔记(汇总)" target="_blank">python学习笔记(汇总)</a>
                        <span class="text-muted">朕的剑还未配妥</span>
<a class="tag" taget="_blank" href="/search/python%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0%E6%95%B4%E7%90%86/1.htm">python学习笔记整理</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E5%AD%A6%E4%B9%A0/1.htm">学习</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a>
                        <div>文章目录一.基础知识二.python中的数据类型三.运算符四.程序的控制结构五.列表六.字典七.元组八.集合九.字符串十.函数十一.解决bug一.基础知识print函数字符串要加引号,数字可不加引号,如print(123.4)print('小谢')print("洛天依")还可输入表达式,如print(1+3)如果使用三引号,print打印的内容可不在同一行print("line1line2line</div>
                    </li>
                    <li><a href="/article/1950175452580605952.htm"
                           title="Gerapy爬虫管理框架深度解析:企业级分布式爬虫管控平台" target="_blank">Gerapy爬虫管理框架深度解析:企业级分布式爬虫管控平台</a>
                        <span class="text-muted">Python×CATIA工业智造</span>
<a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a><a class="tag" taget="_blank" href="/search/%E5%88%86%E5%B8%83%E5%BC%8F/1.htm">分布式</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/pycharm/1.htm">pycharm</a>
                        <div>引言:爬虫工程化的必然选择随着企业数据采集需求指数级增长,传统单点爬虫管理模式面临三重困境:管理效率瓶颈:手动部署耗时占开发总时长的40%以上系统可靠性低:研究显示超过65%的爬虫故障源于部署或调度错误资源利用率差:平均爬虫服务器CPU利用率不足30%爬虫管理方案对比:┌───────────────┬─────────────┬───────────┬───────────┬──────────</div>
                    </li>
                    <li><a href="/article/1950175199089455104.htm"
                           title="PDF转Markdown - Python 实现方案与代码" target="_blank">PDF转Markdown - Python 实现方案与代码</a>
                        <span class="text-muted">Eiceblue</span>
<a class="tag" taget="_blank" href="/search/Python/1.htm">Python</a><a class="tag" taget="_blank" href="/search/Python/1.htm">Python</a><a class="tag" taget="_blank" href="/search/PDF/1.htm">PDF</a><a class="tag" taget="_blank" href="/search/pdf/1.htm">pdf</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a><a class="tag" taget="_blank" href="/search/vscode/1.htm">vscode</a>
                        <div>PDF作为广泛使用的文档格式,转换为轻量级标记语言Markdown后,可无缝集成到技术文档、博客平台和版本控制系统中,提高内容的可编辑性和可访问性。本文将详细介绍如何使用国产Spire.PDFforPython库将PDF文档转换为Markdown格式。技术优势:精准保留原始文档结构(段落/列表/表格)完整提取文本和图像内容无需Adobe依赖的纯Python实现支持Linux/Windows/mac</div>
                    </li>
                    <li><a href="/article/1950174441992417280.htm"
                           title="使用Python和Gradio构建实时数据可视化工具" target="_blank">使用Python和Gradio构建实时数据可视化工具</a>
                        <span class="text-muted">PythonAI编程架构实战家</span>
<a class="tag" taget="_blank" href="/search/%E4%BF%A1%E6%81%AF%E5%8F%AF%E8%A7%86%E5%8C%96/1.htm">信息可视化</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a><a class="tag" taget="_blank" href="/search/ai/1.htm">ai</a>
                        <div>使用Python和Gradio构建实时数据可视化工具关键词:Python、Gradio、数据可视化、实时数据、Web应用、交互式界面、数据科学摘要:本文将详细介绍如何使用Python和Gradio框架构建一个实时数据可视化工具。我们将从基础概念开始,逐步深入到核心算法实现,包括数据处理、可视化技术以及Gradio的交互式界面设计。通过实际项目案例,读者将学习如何创建一个功能完整、响应迅速的实时数据</div>
                    </li>
                    <li><a href="/article/1950174315609649152.htm"
                           title="Python Gradio:实现交互式图像编辑" target="_blank">Python Gradio:实现交互式图像编辑</a>
                        <span class="text-muted">PythonAI编程架构实战家</span>
<a class="tag" taget="_blank" href="/search/Python%E7%BC%96%E7%A8%8B%E4%B9%8B%E9%81%93/1.htm">Python编程之道</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a><a class="tag" taget="_blank" href="/search/ai/1.htm">ai</a>
                        <div>PythonGradio:实现交互式图像编辑关键词:Python,Gradio,交互式图像编辑,计算机视觉,深度学习,图像处理,Web应用摘要:本文将深入探讨如何使用Python的Gradio库构建交互式图像编辑应用。我们将从基础概念开始,逐步介绍Gradio的核心功能,并通过实际代码示例展示如何实现各种图像处理功能。文章将涵盖图像滤镜应用、对象检测、风格迁移等高级功能,同时提供完整的项目实战案例</div>
                    </li>
                    <li><a href="/article/1950174063116742656.htm"
                           title="数据可视化:数据世界的直观呈现" target="_blank">数据可视化:数据世界的直观呈现</a>
                        <span class="text-muted">卢政权1</span>
<a class="tag" taget="_blank" href="/search/%E4%BF%A1%E6%81%AF%E5%8F%AF%E8%A7%86%E5%8C%96/1.htm">信息可视化</a><a class="tag" taget="_blank" href="/search/%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90/1.htm">数据分析</a><a class="tag" taget="_blank" href="/search/%E6%95%B0%E6%8D%AE%E6%8C%96%E6%8E%98/1.htm">数据挖掘</a>
                        <div>在当今数字化浪潮中,数据呈爆炸式增长。数据可视化作为一种强大的技术手段,能够将复杂的数据转化为直观的图形、图表等形式,让数据背后的信息一目了然。无论是在商业决策、科学研究还是日常数据分析中,数据可视化都发挥着极为重要的作用。它帮助我们快速理解数据的分布、趋势、关联等特征,从而为进一步的分析和行动提供有力支持。接下来,我们将深入探讨数据可视化的奥秘,并通过代码示例展示其实际应用。一、Python数据</div>
                    </li>
                    <li><a href="/article/1950172300749893632.htm"
                           title="Python 程序设计讲义(25):循环结构——嵌套循环" target="_blank">Python 程序设计讲义(25):循环结构——嵌套循环</a>
                        <span class="text-muted"></span>

                        <div>Python程序设计讲义(25):循环结构——嵌套循环目录Python程序设计讲义(25):循环结构——嵌套循环一、嵌套循环的执行流程二、嵌套循环对应的几种情况1、内循环和外循环互不影响2、外循环迭代影响内循环的条件3、外循环迭代影响内循环的循环体嵌套循环是指在一个循环体中嵌套另一个循环。while循环中可以嵌入另一个while循环或for循环。反之,也可以在for循环中嵌入另一个for循环或wh</div>
                    </li>
                    <li><a href="/article/1950166498563649536.htm"
                           title="基于Python引擎的PP-OCR模型库推理" target="_blank">基于Python引擎的PP-OCR模型库推理</a>
                        <span class="text-muted">张欣-男</span>
<a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/ocr/1.htm">ocr</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E5%8F%91%E8%AF%AD%E8%A8%80/1.htm">开发语言</a><a class="tag" taget="_blank" href="/search/PaddleOCR/1.htm">PaddleOCR</a><a class="tag" taget="_blank" href="/search/PaddlePaddle/1.htm">PaddlePaddle</a>
                        <div>基于Python引擎的PP-OCR模型库推理1.文本检测模型推理#下载超轻量中文检测模型:wgethttps://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tartarxfch_PP-OCRv3_det_infer.tarpython3tools/infer/predict_det.py--image_dir=".</div>
                    </li>
                    <li><a href="/article/1950158807220940800.htm"
                           title="一个开源AI牛马神器 | AiPy,平替Manus,装完直接上手写Python!" target="_blank">一个开源AI牛马神器 | AiPy,平替Manus,装完直接上手写Python!</a>
                        <span class="text-muted">Agent加载失败</span>
<a class="tag" taget="_blank" href="/search/%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD/1.htm">人工智能</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E5%BC%80%E6%BA%90/1.htm">开源</a><a class="tag" taget="_blank" href="/search/%E7%AE%97%E6%B3%95/1.htm">算法</a><a class="tag" taget="_blank" href="/search/AI%E7%BC%96%E7%A8%8B/1.htm">AI编程</a>
                        <div>还记得三个月前那个在闲鱼被炒到万元邀请码的Manus吗?现在你点官网,直接提示「所在地区不可用」了它走了,但更香的国产开源项目出现了:AiPy(爱派)。主打一个极致简化的AIAgent理念:别搞什么插件市场、Agent路由,直接给AI一个Python解释器,让它用自然语言写代码干活。听起来狠活?实际体验更狠:•完全本地化,界面傻瓜式操作,支持自然语言生成&执行Python任务;•数据清洗、文档总结</div>
                    </li>
                    <li><a href="/article/1950158303287898112.htm"
                           title="零数学基础理解AI核心概念:梯度下降可视化实战" target="_blank">零数学基础理解AI核心概念:梯度下降可视化实战</a>
                        <span class="text-muted">九章云极AladdinEdu</span>
<a class="tag" taget="_blank" href="/search/%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD/1.htm">人工智能</a><a class="tag" taget="_blank" href="/search/gpu%E7%AE%97%E5%8A%9B/1.htm">gpu算力</a><a class="tag" taget="_blank" href="/search/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/1.htm">深度学习</a><a class="tag" taget="_blank" href="/search/pytorch/1.htm">pytorch</a><a class="tag" taget="_blank" href="/search/python/1.htm">python</a><a class="tag" taget="_blank" href="/search/%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B/1.htm">语言模型</a><a class="tag" taget="_blank" href="/search/opencv/1.htm">opencv</a>
                        <div>点击“AladdinEdu,同学们用得起的【H卡】算力平台”,H卡级别算力,按量计费,灵活弹性,顶级配置,学生专属优惠。用Python动画演示损失函数优化过程,数学公式具象化读者收获:直观理解模型训练本质,破除"数学恐惧症"当盲人登山者摸索下山路径时,他本能地运用了梯度下降算法。本文将用动态可视化技术,让你像感受重力一样理解AI训练的核心原理——无需任何数学公式推导。一、梯度下降:AI世界的"万有</div>
                    </li>
                    <li><a href="/article/1950141538352820224.htm"
                           title="2025.07 Java入门笔记01" target="_blank">2025.07 Java入门笔记01</a>
                        <span class="text-muted">殷浩焕</span>
<a class="tag" taget="_blank" href="/search/%E7%AC%94%E8%AE%B0/1.htm">笔记</a>
                        <div>一、熟悉IDEA和Java语法(一)LiuCourseJavaOOP1.一直在用C++开发,python也用了些,Java是真的不熟,用什么IDE还是问的同事;2.一开始安装了jdk-23,拿VSCode当编辑器,在cmd窗口编译运行,也能玩;但是想正儿八经搞项目开发,还是需要IDE;3.安装了IDEA社区版:(1)IDE通常自带对应编程语言的安装包,例如IDEA自带jbr-21(和jdk是不同的</div>
                    </li>
                                <li><a href="/article/23.htm"
                                       title="HttpClient 4.3与4.3版本以下版本比较" target="_blank">HttpClient 4.3与4.3版本以下版本比较</a>
                                    <span class="text-muted">spjich</span>
<a class="tag" taget="_blank" href="/search/java/1.htm">java</a><a class="tag" taget="_blank" href="/search/httpclient/1.htm">httpclient</a>
                                    <div>网上利用java发送http请求的代码很多,一搜一大把,有的利用的是java.net.*下的HttpURLConnection,有的用httpclient,而且发送的代码也分门别类。今天我们主要来说的是利用httpclient发送请求。 
httpclient又可分为 
 
 httpclient3.x 
 httpclient4.x到httpclient4.3以下 
 httpclient4.3</div>
                                </li>
                                <li><a href="/article/150.htm"
                                       title="Essential Studio Enterprise Edition 2015 v1新功能体验" target="_blank">Essential Studio Enterprise Edition 2015 v1新功能体验</a>
                                    <span class="text-muted">Axiba</span>
<a class="tag" taget="_blank" href="/search/.net/1.htm">.net</a>
                                    <div>概述:Essential Studio已全线升级至2015 v1版本了!新版本为JavaScript和ASP.NET MVC添加了新的文件资源管理器控件,还有其他一些控件功能升级,精彩不容错过,让我们一起来看看吧! 
syncfusion公司是世界领先的Windows开发组件提供商,该公司正式对外发布Essential Studio Enterprise Edition 2015 v1版本。新版本</div>
                                </li>
                                <li><a href="/article/277.htm"
                                       title="[宇宙与天文]微波背景辐射值与地球温度" target="_blank">[宇宙与天文]微波背景辐射值与地球温度</a>
                                    <span class="text-muted">comsci</span>
<a class="tag" taget="_blank" href="/search/%E8%83%8C%E6%99%AF/1.htm">背景</a>
                                    <div> 
 
 
        宇宙这个庞大,无边无际的空间是否存在某种确定的,变化的温度呢? 
 
     如果宇宙微波背景辐射值是表示宇宙空间温度的参数之一,那么测量这些数值,并观测周围的恒星能量输出值,我们是否获得地球的长期气候变化的情况呢? 
 
 
  &nbs</div>
                                </li>
                                <li><a href="/article/404.htm"
                                       title="lvs-server" target="_blank">lvs-server</a>
                                    <span class="text-muted">男人50</span>
<a class="tag" taget="_blank" href="/search/server/1.htm">server</a>
                                    <div>#!/bin/bash 
# 
# LVS script for VS/DR 
# 
#./etc/rc.d/init.d/functions 
# 
VIP=10.10.6.252 
RIP1=10.10.6.101 
RIP2=10.10.6.13 
PORT=80 
case $1 in 
start) 
 
  /sbin/ifconfig eth2:0 $VIP broadca</div>
                                </li>
                                <li><a href="/article/531.htm"
                                       title="java的WebCollector爬虫框架" target="_blank">java的WebCollector爬虫框架</a>
                                    <span class="text-muted">oloz</span>
<a class="tag" taget="_blank" href="/search/%E7%88%AC%E8%99%AB/1.htm">爬虫</a>
                                    <div>WebCollector主页: 
https://github.com/CrawlScript/WebCollector 
 
下载:webcollector-版本号-bin.zip将解压后文件夹中的所有jar包添加到工程既可。 
 
接下来看demo 
package org.spider.myspider;

import cn.edu.hfut.dmic.webcollector.cra</div>
                                </li>
                                <li><a href="/article/658.htm"
                                       title="jQuery append 与 after 的区别" target="_blank">jQuery append 与 after 的区别</a>
                                    <span class="text-muted">小猪猪08</span>

                                    <div>1、after函数 
定义和用法: 
after() 方法在被选元素后插入指定的内容。 
语法: 
$(selector).after(content) 
实例: 
<html> 
<head> 
<script type="text/javascript" src="/jquery/jquery.js"></scr</div>
                                </li>
                                <li><a href="/article/785.htm"
                                       title="mysql知识充电" target="_blank">mysql知识充电</a>
                                    <span class="text-muted">香水浓</span>
<a class="tag" taget="_blank" href="/search/mysql/1.htm">mysql</a>
                                    <div>索引  
索引是在存储引擎中实现的,因此每种存储引擎的索引都不一定完全相同,并且每种存储引擎也不一定支持所有索引类型。 
 
根据存储引擎定义每个表的最大索引数和最大索引长度。所有存储引擎支持每个表至少16个索引,总索引长度至少为256字节。 
 
大多数存储引擎有更高的限制。MYSQL中索引的存储类型有两种:BTREE和HASH,具体和表的存储引擎相关; 
 
MYISAM和InnoDB存储引擎</div>
                                </li>
                                <li><a href="/article/912.htm"
                                       title="我的架构经验系列文章索引" target="_blank">我的架构经验系列文章索引</a>
                                    <span class="text-muted">agevs</span>
<a class="tag" taget="_blank" href="/search/%E6%9E%B6%E6%9E%84/1.htm">架构</a>
                                    <div>下面是一些个人架构上的总结,本来想只在公司内部进行共享的,因此内容写的口语化一点,也没什么图示,所有内容没有查任何资料是脑子里面的东西吐出来的因此可能会不准确不全,希望抛砖引玉,大家互相讨论。 
要注意,我这些文章是一个总体的架构经验不针对具体的语言和平台,因此也不一定是适用所有的语言和平台的。 
(内容是前几天写的,现附上索引) 
  
 
 前端架构 http://www.</div>
                                </li>
                                <li><a href="/article/1039.htm"
                                       title="Android so lib库远程http下载和动态注册" target="_blank">Android so lib库远程http下载和动态注册</a>
                                    <span class="text-muted">aijuans</span>
<a class="tag" taget="_blank" href="/search/andorid/1.htm">andorid</a>
                                    <div>一、背景 
  
   在开发Android应用程序的实现,有时候需要引入第三方so lib库,但第三方so库比较大,例如开源第三方播放组件ffmpeg库, 如果直接打包的apk包里面, 整个应用程序会大很多.经过查阅资料和实验,发现通过远程下载so文件,然后再动态注册so文件时可行的。主要需要解决下载so文件存放位置以及文件读写权限问题。 
  
二、主要</div>
                                </li>
                                <li><a href="/article/1166.htm"
                                       title="linux中svn配置出错 conf/svnserve.conf:12: Option expected 解决方法" target="_blank">linux中svn配置出错 conf/svnserve.conf:12: Option expected 解决方法</a>
                                    <span class="text-muted">baalwolf</span>
<a class="tag" taget="_blank" href="/search/option/1.htm">option</a>
                                    <div>在客户端访问subversion版本库时出现这个错误: 
svnserve.conf:12: Option expected 
为什么会出现这个错误呢,就是因为subversion读取配置文件svnserve.conf时,无法识别有前置空格的配置文件,如### This file controls the configuration of the svnserve daemon, if you##</div>
                                </li>
                                <li><a href="/article/1293.htm"
                                       title="MongoDB的连接池和连接管理" target="_blank">MongoDB的连接池和连接管理</a>
                                    <span class="text-muted">BigCat2013</span>
<a class="tag" taget="_blank" href="/search/mongodb/1.htm">mongodb</a>
                                    <div>在关系型数据库中,我们总是需要关闭使用的数据库连接,不然大量的创建连接会导致资源的浪费甚至于数据库宕机。这篇文章主要想解释一下mongoDB的连接池以及连接管理机制,如果正对此有疑惑的朋友可以看一下。 
通常我们习惯于new 一个connection并且通常在finally语句中调用connection的close()方法将其关闭。正巧,mongoDB中当我们new一个Mongo的时候,会发现它也</div>
                                </li>
                                <li><a href="/article/1420.htm"
                                       title="AngularJS使用Socket.IO" target="_blank">AngularJS使用Socket.IO</a>
                                    <span class="text-muted">bijian1013</span>
<a class="tag" taget="_blank" href="/search/JavaScript/1.htm">JavaScript</a><a class="tag" taget="_blank" href="/search/AngularJS/1.htm">AngularJS</a><a class="tag" taget="_blank" href="/search/Socket.IO/1.htm">Socket.IO</a>
                                    <div>        目前,web应用普遍被要求是实时web应用,即服务端的数据更新之后,应用能立即更新。以前使用的技术(例如polling)存在一些局限性,而且有时我们需要在客户端打开一个socket,然后进行通信。 
        Socket.IO(http://socket.io/)是一个非常优秀的库,它可以帮你实</div>
                                </li>
                                <li><a href="/article/1547.htm"
                                       title="[Maven学习笔记四]Maven依赖特性" target="_blank">[Maven学习笔记四]Maven依赖特性</a>
                                    <span class="text-muted">bit1129</span>
<a class="tag" taget="_blank" href="/search/maven/1.htm">maven</a>
                                    <div>三个模块 
为了说明问题,以用户登陆小web应用为例。通常一个web应用分为三个模块,模型和数据持久化层user-core, 业务逻辑层user-service以及web展现层user-web, 
user-service依赖于user-core 
user-web依赖于user-core和user-service 
  
依赖作用范围 
 Maven的dependency定义</div>
                                </li>
                                <li><a href="/article/1674.htm"
                                       title="【Akka一】Akka入门" target="_blank">【Akka一】Akka入门</a>
                                    <span class="text-muted">bit1129</span>
<a class="tag" taget="_blank" href="/search/akka/1.htm">akka</a>
                                    <div>什么是Akka 
Message-Driven Runtime is the Foundation to Reactive Applications 
In Akka, your business logic is driven through message-based communication patterns that are independent of physical locatio</div>
                                </li>
                                <li><a href="/article/1801.htm"
                                       title="zabbix_api之perl语言写法" target="_blank">zabbix_api之perl语言写法</a>
                                    <span class="text-muted">ronin47</span>
<a class="tag" taget="_blank" href="/search/zabbix_api%E4%B9%8Bperl/1.htm">zabbix_api之perl</a>
                                    <div>zabbix_api网上比较多的写法是python或curl。上次我用java--http://bossr.iteye.com/blog/2195679,这次用perl。for example:   #!/usr/bin/perl 
 
 use 5.010 ; 
 use strict ; 
 use warnings ; 
 use JSON :: RPC :: Client ; 
 use </div>
                                </li>
                                <li><a href="/article/1928.htm"
                                       title="比优衣库跟牛掰的视频流出了,兄弟连Linux运维工程师课堂实录,更加刺激,更加实在!" target="_blank">比优衣库跟牛掰的视频流出了,兄弟连Linux运维工程师课堂实录,更加刺激,更加实在!</a>
                                    <span class="text-muted">brotherlamp</span>
<a class="tag" taget="_blank" href="/search/linux%E8%BF%90%E7%BB%B4%E5%B7%A5%E7%A8%8B%E5%B8%88/1.htm">linux运维工程师</a><a class="tag" taget="_blank" href="/search/linux%E8%BF%90%E7%BB%B4%E5%B7%A5%E7%A8%8B%E5%B8%88%E6%95%99%E7%A8%8B/1.htm">linux运维工程师教程</a><a class="tag" taget="_blank" href="/search/linux%E8%BF%90%E7%BB%B4%E5%B7%A5%E7%A8%8B%E5%B8%88%E8%A7%86%E9%A2%91/1.htm">linux运维工程师视频</a><a class="tag" taget="_blank" href="/search/linux%E8%BF%90%E7%BB%B4%E5%B7%A5%E7%A8%8B%E5%B8%88%E8%B5%84%E6%96%99/1.htm">linux运维工程师资料</a><a class="tag" taget="_blank" href="/search/linux%E8%BF%90%E7%BB%B4%E5%B7%A5%E7%A8%8B%E5%B8%88%E8%87%AA%E5%AD%A6/1.htm">linux运维工程师自学</a>
                                    <div>比优衣库跟牛掰的视频流出了,兄弟连Linux运维工程师课堂实录,更加刺激,更加实在! 
  
----------------------------------------------------- 
兄弟连Linux运维工程师课堂实录-计算机基础-1-课程体系介绍1 
链接:http://pan.baidu.com/s/1i3GQtGL 密码:bl65 
  
兄弟连Lin</div>
                                </li>
                                <li><a href="/article/2055.htm"
                                       title="bitmap求哈密顿距离-给定N(1<=N<=100000)个五维的点A(x1,x2,x3,x4,x5),求两个点X(x1,x2,x3,x4,x5)和Y(" target="_blank">bitmap求哈密顿距离-给定N(1<=N<=100000)个五维的点A(x1,x2,x3,x4,x5),求两个点X(x1,x2,x3,x4,x5)和Y(</a>
                                    <span class="text-muted">bylijinnan</span>
<a class="tag" taget="_blank" href="/search/java/1.htm">java</a>
                                    <div>
import java.util.Random;

/**
 * 题目:
 * 给定N(1<=N<=100000)个五维的点A(x1,x2,x3,x4,x5),求两个点X(x1,x2,x3,x4,x5)和Y(y1,y2,y3,y4,y5),
 * 使得他们的哈密顿距离(d=|x1-y1| + |x2-y2| + |x3-y3| + |x4-y4| + |x5-y5|)最大</div>
                                </li>
                                <li><a href="/article/2182.htm"
                                       title="map的三种遍历方法" target="_blank">map的三种遍历方法</a>
                                    <span class="text-muted">chicony</span>
<a class="tag" taget="_blank" href="/search/map/1.htm">map</a>
                                    <div>  
package com.test;

import java.util.Collection;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

public class TestMap {
    public static v</div>
                                </li>
                                <li><a href="/article/2309.htm"
                                       title="Linux安装mysql的一些坑" target="_blank">Linux安装mysql的一些坑</a>
                                    <span class="text-muted">chenchao051</span>
<a class="tag" taget="_blank" href="/search/linux/1.htm">linux</a>
                                    <div>1、mysql不建议在root用户下运行 
  
2、出现服务启动不了,111错误,注意要用chown来赋予权限, 我在root用户下装的mysql,我就把usr/share/mysql/mysql.server复制到/etc/init.d/mysqld, (同时把my-huge.cnf复制/etc/my.cnf)  
chown -R cc /etc/init.d/mysql</div>
                                </li>
                                <li><a href="/article/2436.htm"
                                       title="Sublime Text 3 配置" target="_blank">Sublime Text 3 配置</a>
                                    <span class="text-muted">daizj</span>
<a class="tag" taget="_blank" href="/search/%E9%85%8D%E7%BD%AE/1.htm">配置</a><a class="tag" taget="_blank" href="/search/Sublime+Text/1.htm">Sublime Text</a>
                                    <div>Sublime Text 3 配置解释(默认){// 设置主题文件“color_scheme”: “Packages/Color Scheme – Default/Monokai.tmTheme”,// 设置字体和大小“font_face”: “Consolas”,“font_size”: 12,// 字体选项:no_bold不显示粗体字,no_italic不显示斜体字,no_antialias和</div>
                                </li>
                                <li><a href="/article/2563.htm"
                                       title="MySQL server has gone away 问题的解决方法" target="_blank">MySQL server has gone away 问题的解决方法</a>
                                    <span class="text-muted">dcj3sjt126com</span>
<a class="tag" taget="_blank" href="/search/SQL+Server/1.htm">SQL Server</a>
                                    <div>MySQL server has gone away 问题解决方法,需要的朋友可以参考下。 
应用程序(比如PHP)长时间的执行批量的MYSQL语句。执行一个SQL,但SQL语句过大或者语句中含有BLOB或者longblob字段。比如,图片数据的处理。都容易引起MySQL server has gone away。 今天遇到类似的情景,MySQL只是冷冷的说:MySQL server h</div>
                                </li>
                                <li><a href="/article/2690.htm"
                                       title="javascript/dom:固定居中效果" target="_blank">javascript/dom:固定居中效果</a>
                                    <span class="text-muted">dcj3sjt126com</span>
<a class="tag" taget="_blank" href="/search/JavaScript/1.htm">JavaScript</a>
                                    <div><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 
<html xmlns="http://www.w3.org/1999/xhtml&</div>
                                </li>
                                <li><a href="/article/2817.htm"
                                       title="使用 Spring 2.5 注释驱动的 IoC 功能" target="_blank">使用 Spring 2.5 注释驱动的 IoC 功能</a>
                                    <span class="text-muted">e200702084</span>
<a class="tag" taget="_blank" href="/search/spring/1.htm">spring</a><a class="tag" taget="_blank" href="/search/bean/1.htm">bean</a><a class="tag" taget="_blank" href="/search/%E9%85%8D%E7%BD%AE%E7%AE%A1%E7%90%86/1.htm">配置管理</a><a class="tag" taget="_blank" href="/search/IOC/1.htm">IOC</a><a class="tag" taget="_blank" href="/search/Office/1.htm">Office</a>
                                    <div>使用 Spring 2.5 注释驱动的 IoC 功能 
 developerWorks 
 
 
文档选项 
 将打印机的版面设置成横向打印模式 
 
打印本页 
 将此页作为电子邮件发送 
 
将此页作为电子邮件发送 
 
级别: 初级 
 
陈 雄华 (quickselect@163.com), 技术总监, 宝宝淘网络科技有限公司 
 
2008 年 2 月 28 日 
 
 &nb</div>
                                </li>
                                <li><a href="/article/2944.htm"
                                       title="MongoDB常用操作命令" target="_blank">MongoDB常用操作命令</a>
                                    <span class="text-muted">geeksun</span>
<a class="tag" taget="_blank" href="/search/mongodb/1.htm">mongodb</a>
                                    <div>1.   基本操作 
db.AddUser(username,password)               添加用户 
db.auth(usrename,password)      设置数据库连接验证 
db.cloneDataBase(fromhost)     </div>
                                </li>
                                <li><a href="/article/3071.htm"
                                       title="php写守护进程(Daemon)" target="_blank">php写守护进程(Daemon)</a>
                                    <span class="text-muted">hongtoushizi</span>
<a class="tag" taget="_blank" href="/search/PHP/1.htm">PHP</a>
                                    <div>转载自: http://blog.csdn.net/tengzhaorong/article/details/9764655 
  
守护进程(Daemon)是运行在后台的一种特殊进程。它独立于控制终端并且周期性地执行某种任务或等待处理某些发生的事件。守护进程是一种很有用的进程。php也可以实现守护进程的功能。 
  
1、基本概念 
  &nbs</div>
                                </li>
                                <li><a href="/article/3198.htm"
                                       title="spring整合mybatis,关于注入Dao对象出错问题" target="_blank">spring整合mybatis,关于注入Dao对象出错问题</a>
                                    <span class="text-muted">jonsvien</span>
<a class="tag" taget="_blank" href="/search/DAO/1.htm">DAO</a><a class="tag" taget="_blank" href="/search/spring/1.htm">spring</a><a class="tag" taget="_blank" href="/search/bean/1.htm">bean</a><a class="tag" taget="_blank" href="/search/mybatis/1.htm">mybatis</a><a class="tag" taget="_blank" href="/search/prototype/1.htm">prototype</a>
                                    <div>今天在公司测试功能时发现一问题: 
先进行代码说明: 
1,controller配置了Scope="prototype"(表明每一次请求都是原子型) 
   @resource/@autowired service对象都可以(两种注解都可以)。 
2,service 配置了Scope="prototype"(表明每一次请求都是原子型) 
</div>
                                </li>
                                <li><a href="/article/3325.htm"
                                       title="对象关系行为模式之标识映射" target="_blank">对象关系行为模式之标识映射</a>
                                    <span class="text-muted">home198979</span>
<a class="tag" taget="_blank" href="/search/PHP/1.htm">PHP</a><a class="tag" taget="_blank" href="/search/%E6%9E%B6%E6%9E%84/1.htm">架构</a><a class="tag" taget="_blank" href="/search/%E4%BC%81%E4%B8%9A%E5%BA%94%E7%94%A8/1.htm">企业应用</a><a class="tag" taget="_blank" href="/search/%E5%AF%B9%E8%B1%A1%E5%85%B3%E7%B3%BB/1.htm">对象关系</a><a class="tag" taget="_blank" href="/search/%E6%A0%87%E8%AF%86%E6%98%A0%E5%B0%84/1.htm">标识映射</a>
                                    <div>HELLO!架构 
  
一、概念 
identity Map:通过在映射中保存每个已经加载的对象,确保每个对象只加载一次,当要访问对象的时候,通过映射来查找它们。其实在数据源架构模式之数据映射器代码中有提及到标识映射,Mapper类的getFromMap方法就是实现标识映射的实现。 
  
  
二、为什么要使用标识映射? 
在数据源架构模式之数据映射器中 
//c</div>
                                </li>
                                <li><a href="/article/3452.htm"
                                       title="Linux下hosts文件详解" target="_blank">Linux下hosts文件详解</a>
                                    <span class="text-muted">pda158</span>
<a class="tag" taget="_blank" href="/search/linux/1.htm">linux</a>
                                    <div> 1、主机名:     无论在局域网还是INTERNET上,每台主机都有一个IP地址,是为了区分此台主机和彼台主机,也就是说IP地址就是主机的门牌号。     公网:IP地址不方便记忆,所以又有了域名。域名只是在公网(INtERNET)中存在,每个域名都对应一个IP地址,但一个IP地址可有对应多个域名。     局域网:每台机器都有一个主机名,用于主机与主机之间的便于区分,就可以为每台机器设置主机</div>
                                </li>
                                <li><a href="/article/3579.htm"
                                       title="nginx配置文件粗解" target="_blank">nginx配置文件粗解</a>
                                    <span class="text-muted">spjich</span>
<a class="tag" taget="_blank" href="/search/java/1.htm">java</a><a class="tag" taget="_blank" href="/search/nginx/1.htm">nginx</a>
                                    <div>#运行用户#user  nobody;#启动进程,通常设置成和cpu的数量相等worker_processes  2;#全局错误日志及PID文件#error_log  logs/error.log;#error_log  logs/error.log  notice;#error_log  logs/error.log  inf</div>
                                </li>
                                <li><a href="/article/3706.htm"
                                       title="数学函数" target="_blank">数学函数</a>
                                    <span class="text-muted">w54653520</span>
<a class="tag" taget="_blank" href="/search/java/1.htm">java</a>
                                    <div>public  
class  
S {       
     
// 传入两个整数,进行比较,返回两个数中的最大值的方法。   
     
public  
int  
get( 
int  
num1, 
int  
nu</div>
                                </li>
                </ul>
            </div>
        </div>
    </div>

<footer id="footer" class="mb30 mt30">
    <div class="container">
        <div class="copyright">版权所有 IT知识库 CopyRight © 2000-2050 E-COM-NET.COM , All Rights Reserved.
        </div>
    </div>
</footer>

</body>

</html>