Rookie Python in Practice - 05 Crawler: Downloading Videos

Three ways to scrape (that is, download) videos:
Method 1: requests.get
Method 2: urllib.request.urlretrieve
Method 3: the you-get downloader

Method 1: requests.get
1 - Install requests and the other libraries the code needs:
import requests
2 - Download each video with requests.get(item.get('url'))
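The download step above can be sketched as follows. `download_video` and `filename_from_url` are hypothetical helper names (not from the original script), and streaming in chunks is one common way to keep a large video file out of memory:

```python
import os

def filename_from_url(url, out_dir="Result_File"):
    """Derive a local save path from the last path segment of the video URL."""
    name = url.rsplit("/", 1)[-1].split("?", 1)[0] or "video.mp4"
    return os.path.join(out_dir, name)

def download_video(url, out_dir="Result_File"):
    """Stream the video to disk in chunks so the whole file never sits in memory."""
    import requests  # imported here so filename_from_url works even without requests installed
    os.makedirs(out_dir, exist_ok=True)
    path = filename_from_url(url, out_dir)
    resp = requests.get(url, stream=True, timeout=30)
    resp.raise_for_status()  # fail loudly on 4xx/5xx instead of saving an error page
    with open(path, "wb") as f:
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            f.write(chunk)
    return path
```

With `stream=True`, requests defers the body download until `iter_content` is consumed, which matters for multi-hundred-megabyte video files.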

The complete code:

# -*- coding: utf-8 -*-
"""
Created on Sat Aug  7 21:51:45 2021

@author: neo
菜鸟Python实战-05爬虫之爬取视频
参考内容1:https://blog.csdn.net/weixin_48923393/article/details/117377043?utm_medium=distribute.pc_relevant.none-task-blog-2~default~baidujs_title~default-0.control&spm=1001.2101.3001.4242
参考内容2:https://blog.csdn.net/cainiao_python/article/details/117049922?utm_medium=distribute.pc_relevant.none-task-blog-2%7Edefault%7EBlogCommendFromBaidu%7Edefault-18.control&depth_1-utm_source=distribute.pc_relevant.none-task-blog-2%7Edefault%7EBlogCommendFromBaidu%7Edefault-18.control
"""


import requests
from bs4 import BeautifulSoup
import time
import random
import json
import re
from tqdm import tqdm  # progress bar

import urllib.request  # fetch URLs and page data; also the alternative download route (method 2)

url_file_name = '.\\Result_File\\url.txt'
 
def get_list():
    for p in range(1):  # number of pages to crawl

        html = requests.get('https://v.huya.com/g/pet?set_id=43&order=hot&page={}'.format(p + 1))  # URL of each page
        soup = BeautifulSoup(html.text, 'html.parser')
        ul = soup.find('ul', class_='vhy-video-list w215 clearfix')  # the video-list section of the page
        # the original post is truncated here; a minimal continuation that saves each
        # video page link to url.txt for later download:
        with open(url_file_name, 'a', encoding='utf-8') as f:
            for li in ul.find_all('li'):  # one <li> per video card
                a = li.find('a')
                if a and a.get('href'):
                    f.write(a['href'] + '\n')
        time.sleep(random.uniform(1, 3))  # pause between pages to be polite to the server
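Methods 2 and 3 from the list at the top do not appear in the script above; here is a minimal sketch of both, assuming you already have a direct video URL and that the you-get CLI is installed (pip install you-get). The function names are illustrative, not from the original post:

```python
import subprocess
import urllib.request

def download_with_urlretrieve(url, path):
    """Method 2: urllib.request.urlretrieve writes the URL's content straight to path."""
    urllib.request.urlretrieve(url, path)

def download_with_youget(url, out_dir="Result_File"):
    """Method 3: shell out to the you-get CLI; it chooses the filename itself."""
    # -o sets the output directory; returns the CLI's exit code (0 on success)
    return subprocess.call(["you-get", "-o", out_dir, url])
```

urlretrieve is the simplest option but offers no progress or retry control; you-get is handy for sites (Bilibili, YouTube, etc.) where the real video URL is not exposed directly in the page.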
