PyTorch 2.2 Official Tutorials (Part Seven)

Text classification with the torchtext library

Original: pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html

Translator: 飞龙

License: CC BY-NC-SA 4.0

Note

Click here to download the full example code

In this tutorial, we will show how to use the torchtext library to build the dataset for text classification analysis. Users will have the flexibility to

  • access the raw data as an iterator
  • build a data processing pipeline to convert the raw text strings into torch.Tensor that can be used to train the model
  • shuffle and iterate the data with torch.utils.data.DataLoader

Prerequisites

A recent version of the portalocker package needs to be installed before running the tutorial. For example, in the Colab environment, this can be done by adding the following line at the top of the script:

!pip install -U portalocker>=2.0.0 

Access to the raw dataset iterators

The torchtext library provides a few raw dataset iterators, which yield the raw text strings. For example, the AG_NEWS dataset iterator yields the raw data as a tuple of label and text.

To access the torchtext datasets, please install torchdata following the instructions at github.com/pytorch/data.

import torch
from torchtext.datasets import AG_NEWS

train_iter = iter(AG_NEWS(split="train")) 
next(train_iter)
>>>  (3,  "Fears for T N pension after talks Unions representing workers at Turner
Newall say they are 'disappointed' after talks with stricken parent firm Federal
Mogul.")

next(train_iter)
>>>  (4,  "The Race is On: Second Private Team Sets Launch Date for Human
Spaceflight (SPACE.com) SPACE.com - TORONTO, Canada -- A second\\team of
rocketeers competing for the  #36;10 million Ansari X Prize, a contest
for\\privately funded suborbital space flight, has officially announced
the first\\launch date for its manned rocket.")

next(train_iter)
>>>  (4,  'Ky. Company Wins Grant to Study Peptides (AP) AP - A company founded
by a chemistry researcher at the University of Louisville won a grant to develop
a method of producing better peptides, which are short chains of amino acids, the
building blocks of proteins.') 

Prepare data processing pipelines

We have revisited the very basic components of the torchtext library, including the vocabulary, word vectors, and tokenizer. Those are the basic data processing building blocks for raw text strings.

Here is an example of typical NLP data processing with the tokenizer and vocabulary. The first step is to build a vocabulary with the raw training dataset. Here we use the built-in factory function build_vocab_from_iterator, which accepts an iterator that yields a list or iterator of tokens. Users can also pass any special symbols to be added to the vocabulary.

from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

tokenizer = get_tokenizer("basic_english")
train_iter = AG_NEWS(split="train")

def yield_tokens(data_iter):
    for _, text in data_iter:
        yield tokenizer(text)

vocab = build_vocab_from_iterator(yield_tokens(train_iter), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"]) 

The vocabulary block converts a list of tokens into integers.

vocab(['here',  'is',  'an',  'example'])
>>>  [475,  21,  30,  5297] 

Prepare the text processing pipeline with the tokenizer and vocabulary. The text and label pipelines will be used to process the raw data strings from the dataset iterators.

text_pipeline = lambda x: vocab(tokenizer(x))
label_pipeline = lambda x: int(x) - 1 

The text pipeline converts a text string into a list of integers based on the lookup table defined in the vocabulary. The label pipeline converts the label into an integer. For example,

text_pipeline('here is the an example')
>>>  [475,  21,  2,  30,  5297]
label_pipeline('10')
>>>  9 

Generate data batch and iterator

torch.utils.data.DataLoader is recommended for PyTorch users (see the tutorial here). It works with a map-style dataset that implements the __getitem__() and __len__() protocols, and represents a map from indices/keys to data samples. It also works with an iterable dataset with the shuffle argument of False.
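For illustration, here is a minimal sketch (not part of the original tutorial) of a map-style dataset that DataLoader can consume; the class name and the toy data are made up for the example:

from torch.utils.data import Dataset

class ToyMapDataset(Dataset):
    # A tiny map-style dataset: implements __getitem__() and __len__()
    def __init__(self, samples):
        self.samples = samples  # e.g. a list of (label, text) tuples

    def __getitem__(self, idx):
        return self.samples[idx]

    def __len__(self):
        return len(self.samples)

toy = ToyMapDataset([(1, "hello world"), (2, "good morning")])
print(len(toy), toy[0])  # 2 (1, 'hello world')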

Before sending to the model, the collate_fn function works on a batch of samples generated from DataLoader. The input to collate_fn is a batch of data with the batch size set in DataLoader, and collate_fn processes it according to the data processing pipelines declared previously. Pay attention here and make sure that collate_fn is declared as a top level def. This ensures that the function is available in each worker process.

In this example, the text entries in the original data batch input are packed into a list and concatenated as a single tensor for the input of nn.EmbeddingBag. The offsets is a tensor of delimiters that represents the beginning index of each individual sequence in the text tensor. Label is a tensor saving the labels of the individual text entries.

from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def collate_batch(batch):
    label_list, text_list, offsets = [], [], [0]
    for _label, _text in batch:
        label_list.append(label_pipeline(_label))
        processed_text = torch.tensor(text_pipeline(_text), dtype=torch.int64)
        text_list.append(processed_text)
        offsets.append(processed_text.size(0))
    label_list = torch.tensor(label_list, dtype=torch.int64)
    offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
    text_list = torch.cat(text_list)
    return label_list.to(device), text_list.to(device), offsets.to(device)

train_iter = AG_NEWS(split="train")
dataloader = DataLoader(
    train_iter, batch_size=8, shuffle=False, collate_fn=collate_batch
) 

Define the model

The model is composed of the nn.EmbeddingBag layer plus a linear layer for the classification purpose. nn.EmbeddingBag with the default mode of "mean" computes the mean value of a "bag" of embeddings. Although the text entries here have different lengths, the nn.EmbeddingBag module requires no padding, since the text lengths are saved in offsets.

Additionally, since nn.EmbeddingBag accumulates the average across the embeddings on the fly, nn.EmbeddingBag can enhance the performance and memory efficiency of processing a sequence of tensors.
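To make the offsets mechanism concrete, here is a small sketch (not from the original tutorial) showing how two variable-length sequences are concatenated into one flat tensor and separated by offsets:

import torch
import torch.nn as nn

bag = nn.EmbeddingBag(num_embeddings=10, embedding_dim=3, mode="mean")
# Two sequences of token ids with different lengths, flattened into one tensor
text = torch.tensor([1, 2, 3, 4, 5], dtype=torch.int64)  # seq1 = [1, 2, 3], seq2 = [4, 5]
offsets = torch.tensor([0, 3], dtype=torch.int64)        # seq1 starts at 0, seq2 starts at 3
out = bag(text, offsets)
print(out.shape)  # torch.Size([2, 3]) - one mean embedding per sequence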

(figure: the text classification model - an nn.EmbeddingBag layer followed by a linear layer)

from torch import nn

class TextClassificationModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class):
        super(TextClassificationModel, self).__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=False)
        self.fc = nn.Linear(embed_dim, num_class)
        self.init_weights()

    def init_weights(self):
        initrange = 0.5
        self.embedding.weight.data.uniform_(-initrange, initrange)
        self.fc.weight.data.uniform_(-initrange, initrange)
        self.fc.bias.data.zero_()

    def forward(self, text, offsets):
        embedded = self.embedding(text, offsets)
        return self.fc(embedded) 

Initiate an instance

The AG_NEWS dataset has four labels and therefore the number of classes is four.

1  :  World
2  :  Sports
3  :  Business
4  :  Sci/Tec 

We build a model with the embedding dimension of 64. The vocab size is equal to the length of the vocabulary instance. The number of classes is equal to the number of labels.

train_iter = AG_NEWS(split="train")
num_class = len(set([label for (label, text) in train_iter]))
vocab_size = len(vocab)
emsize = 64
model = TextClassificationModel(vocab_size, emsize, num_class).to(device) 

Define functions to train the model and evaluate results.

import time

def train(dataloader):
    model.train()
    total_acc, total_count = 0, 0
    log_interval = 500
    start_time = time.time()

    for idx, (label, text, offsets) in enumerate(dataloader):
        optimizer.zero_grad()
        predicted_label = model(text, offsets)
        loss = criterion(predicted_label, label)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
        optimizer.step()
        total_acc += (predicted_label.argmax(1) == label).sum().item()
        total_count += label.size(0)
        if idx % log_interval == 0 and idx > 0:
            elapsed = time.time() - start_time
            print(
                "| epoch {:3d} | {:5d}/{:5d} batches "
                "| accuracy {:8.3f}".format(
                    epoch, idx, len(dataloader), total_acc / total_count
                )
            )
            total_acc, total_count = 0, 0
            start_time = time.time()

def evaluate(dataloader):
    model.eval()
    total_acc, total_count = 0, 0

    with torch.no_grad():
        for idx, (label, text, offsets) in enumerate(dataloader):
            predicted_label = model(text, offsets)
            loss = criterion(predicted_label, label)
            total_acc += (predicted_label.argmax(1) == label).sum().item()
            total_count += label.size(0)
    return total_acc / total_count 

Split the dataset and run the model

Since the original AG_NEWS has no validation dataset, we split the training dataset into train/valid sets with a split ratio of 0.95 (train) and 0.05 (valid). Here we use the torch.utils.data.dataset.random_split function from the PyTorch core library.

The CrossEntropyLoss criterion combines nn.LogSoftmax() and nn.NLLLoss() in a single class. It is useful when training a classification problem with C classes. SGD implements the stochastic gradient descent method as the optimizer. The initial learning rate is set to 5.0. StepLR is used here to adjust the learning rate through epochs.
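As a quick sanity check (not part of the original tutorial), the following sketch shows that CrossEntropyLoss on raw logits matches NLLLoss applied to log-softmax outputs:

import torch
import torch.nn as nn

logits = torch.randn(4, 3)            # 4 samples, 3 classes
targets = torch.tensor([0, 2, 1, 2])
ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)
print(torch.allclose(ce, nll))  # True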

from torch.utils.data.dataset import random_split
from torchtext.data.functional import to_map_style_dataset

# Hyperparameters
EPOCHS = 10  # epoch
LR = 5  # learning rate
BATCH_SIZE = 64  # batch size for training

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.1)
total_accu = None
train_iter, test_iter = AG_NEWS()
train_dataset = to_map_style_dataset(train_iter)
test_dataset = to_map_style_dataset(test_iter)
num_train = int(len(train_dataset) * 0.95)
split_train_, split_valid_ = random_split(
    train_dataset, [num_train, len(train_dataset) - num_train]
)

train_dataloader = DataLoader(
    split_train_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)
valid_dataloader = DataLoader(
    split_valid_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)
test_dataloader = DataLoader(
    test_dataset, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch
)

for epoch in range(1, EPOCHS + 1):
    epoch_start_time = time.time()
    train(train_dataloader)
    accu_val = evaluate(valid_dataloader)
    if total_accu is not None and total_accu > accu_val:
        scheduler.step()
    else:
        total_accu = accu_val
    print("-" * 59)
    print(
        "| end of epoch {:3d} | time: {:5.2f}s | "
        "valid accuracy {:8.3f} ".format(
            epoch, time.time() - epoch_start_time, accu_val
        )
    )
    print("-" * 59) 
| epoch   1 |   500/ 1782 batches | accuracy    0.694
| epoch   1 |  1000/ 1782 batches | accuracy    0.856
| epoch   1 |  1500/ 1782 batches | accuracy    0.877
-----------------------------------------------------------
| end of epoch   1 | time: 11.29s | valid accuracy    0.886
-----------------------------------------------------------
| epoch   2 |   500/ 1782 batches | accuracy    0.898
| epoch   2 |  1000/ 1782 batches | accuracy    0.899
| epoch   2 |  1500/ 1782 batches | accuracy    0.906
-----------------------------------------------------------
| end of epoch   2 | time: 10.99s | valid accuracy    0.895
-----------------------------------------------------------
| epoch   3 |   500/ 1782 batches | accuracy    0.916
| epoch   3 |  1000/ 1782 batches | accuracy    0.913
| epoch   3 |  1500/ 1782 batches | accuracy    0.915
-----------------------------------------------------------
| end of epoch   3 | time: 10.97s | valid accuracy    0.894
-----------------------------------------------------------
| epoch   4 |   500/ 1782 batches | accuracy    0.930
| epoch   4 |  1000/ 1782 batches | accuracy    0.932
| epoch   4 |  1500/ 1782 batches | accuracy    0.929
-----------------------------------------------------------
| end of epoch   4 | time: 10.97s | valid accuracy    0.902
-----------------------------------------------------------
| epoch   5 |   500/ 1782 batches | accuracy    0.932
| epoch   5 |  1000/ 1782 batches | accuracy    0.933
| epoch   5 |  1500/ 1782 batches | accuracy    0.931
-----------------------------------------------------------
| end of epoch   5 | time: 10.92s | valid accuracy    0.902
-----------------------------------------------------------
| epoch   6 |   500/ 1782 batches | accuracy    0.933
| epoch   6 |  1000/ 1782 batches | accuracy    0.932
| epoch   6 |  1500/ 1782 batches | accuracy    0.935
-----------------------------------------------------------
| end of epoch   6 | time: 10.91s | valid accuracy    0.903
-----------------------------------------------------------
| epoch   7 |   500/ 1782 batches | accuracy    0.934
| epoch   7 |  1000/ 1782 batches | accuracy    0.933
| epoch   7 |  1500/ 1782 batches | accuracy    0.935
-----------------------------------------------------------
| end of epoch   7 | time: 10.90s | valid accuracy    0.903
-----------------------------------------------------------
| epoch   8 |   500/ 1782 batches | accuracy    0.935
| epoch   8 |  1000/ 1782 batches | accuracy    0.933
| epoch   8 |  1500/ 1782 batches | accuracy    0.935
-----------------------------------------------------------
| end of epoch   8 | time: 10.91s | valid accuracy    0.904
-----------------------------------------------------------
| epoch   9 |   500/ 1782 batches | accuracy    0.934
| epoch   9 |  1000/ 1782 batches | accuracy    0.934
| epoch   9 |  1500/ 1782 batches | accuracy    0.934
-----------------------------------------------------------
| end of epoch   9 | time: 10.90s | valid accuracy    0.904
-----------------------------------------------------------
| epoch  10 |   500/ 1782 batches | accuracy    0.934
| epoch  10 |  1000/ 1782 batches | accuracy    0.936
| epoch  10 |  1500/ 1782 batches | accuracy    0.933
-----------------------------------------------------------
| end of epoch  10 | time: 10.91s | valid accuracy    0.905
----------------------------------------------------------- 

Evaluate the model with the test dataset

Checking the results of the test dataset…

print("Checking the results of test dataset.")
accu_test = evaluate(test_dataloader)
print("test accuracy {:8.3f}".format(accu_test)) 
Checking the results of test dataset.
test accuracy    0.907 

Test on a random news

Use the best model so far and test a golf news.

ag_news_label = {1: "World", 2: "Sports", 3: "Business", 4: "Sci/Tec"}

def predict(text, text_pipeline):
    with torch.no_grad():
        text = torch.tensor(text_pipeline(text))
        output = model(text, torch.tensor([0]))
        return output.argmax(1).item() + 1

ex_text_str = "MEMPHIS, Tenn. – Four days ago, Jon Rahm was \
 enduring the season’s worst weather conditions on Sunday at The \
 Open on his way to a closing 75 at Royal Portrush, which \
 considering the wind and the rain was a respectable showing. \
 Thursday’s first round at the WGC-FedEx St. Jude Invitational \
 was another story. With temperatures in the mid-80s and hardly any \
 wind, the Spaniard was 13 strokes better in a flawless round. \
 Thanks to his best putting performance on the PGA Tour, Rahm \
 finished with an 8-under 62 for a three-stroke lead, which \
 was even more impressive considering he’d never played the \
 front nine at TPC Southwind."

model = model.to("cpu")

print("This is a %s news" % ag_news_label[predict(ex_text_str, text_pipeline)]) 
This is a Sports news 

Total running time of the script: (2 minutes 4.692 seconds)

Download Python source code: text_sentiment_ngrams_tutorial.py

Download Jupyter notebook: text_sentiment_ngrams_tutorial.ipynb

Gallery generated by Sphinx-Gallery

Language translation with nn.Transformer and torchtext

Original: pytorch.org/tutorials/beginner/translation_transformer.html

Translator: 飞龙

License: CC BY-NC-SA 4.0

Note

Click here to download the full example code

This tutorial shows:

  • How to train a translation model from scratch using Transformer.

  • Use the torchtext library to access the Multi30k dataset to train a German to English translation model.

Data Sourcing and Processing

The torchtext library has utilities for creating datasets that can be easily iterated through for the purposes of creating a language translation model. In this example, we show how to use torchtext's inbuilt datasets, tokenize raw text sentences, build the vocabulary, and numericalize tokens into tensors. We will use the Multi30k dataset from the torchtext library that yields a pair of source-target raw sentences.

To access the torchtext datasets, please install torchdata following the instructions at github.com/pytorch/data.

from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
from torchtext.datasets import multi30k, Multi30k
from typing import Iterable, List

# We need to modify the URLs for the dataset since the links to the original dataset are broken
# Refer to https://github.com/pytorch/text/issues/1756#issuecomment-1163664163 for more info
multi30k.URL["train"] = "https://raw.githubusercontent.com/neychev/small_DL_repo/master/datasets/Multi30k/training.tar.gz"
multi30k.URL["valid"] = "https://raw.githubusercontent.com/neychev/small_DL_repo/master/datasets/Multi30k/validation.tar.gz"

SRC_LANGUAGE = 'de'
TGT_LANGUAGE = 'en'

# Place-holders
token_transform = {}
vocab_transform = {} 

Create source and target language tokenizers. Make sure to install the dependencies.

pip install -U torchdata
pip install -U spacy
python -m spacy download en_core_web_sm
python -m spacy download de_core_news_sm 
token_transform[SRC_LANGUAGE] = get_tokenizer('spacy', language='de_core_news_sm')
token_transform[TGT_LANGUAGE] = get_tokenizer('spacy', language='en_core_web_sm')

# helper function to yield list of tokens
def yield_tokens(data_iter: Iterable, language: str) -> List[str]:
    language_index = {SRC_LANGUAGE: 0, TGT_LANGUAGE: 1}

    for data_sample in data_iter:
        yield token_transform[language](data_sample[language_index[language]])

# Define special symbols and indices
UNK_IDX, PAD_IDX, BOS_IDX, EOS_IDX = 0, 1, 2, 3
# Make sure the tokens are in order of their indices to properly insert them in vocab
special_symbols = ['<unk>', '<pad>', '<bos>', '<eos>']

for ln in [SRC_LANGUAGE, TGT_LANGUAGE]:
    # Training data Iterator
    train_iter = Multi30k(split='train', language_pair=(SRC_LANGUAGE, TGT_LANGUAGE))
    # Create torchtext's Vocab object
    vocab_transform[ln] = build_vocab_from_iterator(yield_tokens(train_iter, ln),
                                                    min_freq=1,
                                                    specials=special_symbols,
                                                    special_first=True)

# Set ``UNK_IDX`` as the default index. This index is returned when the token is not found.
# If not set, it throws ``RuntimeError`` when the queried token is not found in the Vocabulary.
for ln in [SRC_LANGUAGE, TGT_LANGUAGE]:
  vocab_transform[ln].set_default_index(UNK_IDX) 

Seq2Seq Network using Transformer

Transformer is a Seq2Seq model introduced in the "Attention is all you need" paper for solving machine translation tasks. Below, we will create a Seq2Seq network that uses Transformer. The network consists of three parts. The first part is the embedding layer. This layer converts a tensor of input indices into a corresponding tensor of input embeddings. These embeddings are further augmented with positional encodings to provide position information of the input tokens to the model. The second part is the actual Transformer model. Finally, the output of the Transformer model is passed through a linear layer that gives unnormalized probabilities for each token in the target language.

from torch import Tensor
import torch
import torch.nn as nn
from torch.nn import Transformer
import math
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# helper Module that adds positional encoding to the token embedding to introduce a notion of word order.
class PositionalEncoding(nn.Module):
    def __init__(self,
                 emb_size: int,
                 dropout: float,
                 maxlen: int = 5000):
        super(PositionalEncoding, self).__init__()
        den = torch.exp(- torch.arange(0, emb_size, 2)* math.log(10000) / emb_size)
        pos = torch.arange(0, maxlen).reshape(maxlen, 1)
        pos_embedding = torch.zeros((maxlen, emb_size))
        pos_embedding[:, 0::2] = torch.sin(pos * den)
        pos_embedding[:, 1::2] = torch.cos(pos * den)
        pos_embedding = pos_embedding.unsqueeze(-2)

        self.dropout = nn.Dropout(dropout)
        self.register_buffer('pos_embedding', pos_embedding)

    def forward(self, token_embedding: Tensor):
        return self.dropout(token_embedding + self.pos_embedding[:token_embedding.size(0), :])

# helper Module to convert tensor of input indices into corresponding tensor of token embeddings
class TokenEmbedding(nn.Module):
    def __init__(self, vocab_size: int, emb_size):
        super(TokenEmbedding, self).__init__()
        self.embedding = nn.Embedding(vocab_size, emb_size)
        self.emb_size = emb_size

    def forward(self, tokens: Tensor):
        return self.embedding(tokens.long()) * math.sqrt(self.emb_size)

# Seq2Seq Network
class Seq2SeqTransformer(nn.Module):
    def __init__(self,
                 num_encoder_layers: int,
                 num_decoder_layers: int,
                 emb_size: int,
                 nhead: int,
                 src_vocab_size: int,
                 tgt_vocab_size: int,
                 dim_feedforward: int = 512,
                 dropout: float = 0.1):
        super(Seq2SeqTransformer, self).__init__()
        self.transformer = Transformer(d_model=emb_size,
                                       nhead=nhead,
                                       num_encoder_layers=num_encoder_layers,
                                       num_decoder_layers=num_decoder_layers,
                                       dim_feedforward=dim_feedforward,
                                       dropout=dropout)
        self.generator = nn.Linear(emb_size, tgt_vocab_size)
        self.src_tok_emb = TokenEmbedding(src_vocab_size, emb_size)
        self.tgt_tok_emb = TokenEmbedding(tgt_vocab_size, emb_size)
        self.positional_encoding = PositionalEncoding(
            emb_size, dropout=dropout)

    def forward(self,
                src: Tensor,
                trg: Tensor,
                src_mask: Tensor,
                tgt_mask: Tensor,
                src_padding_mask: Tensor,
                tgt_padding_mask: Tensor,
                memory_key_padding_mask: Tensor):
        src_emb = self.positional_encoding(self.src_tok_emb(src))
        tgt_emb = self.positional_encoding(self.tgt_tok_emb(trg))
        outs = self.transformer(src_emb, tgt_emb, src_mask, tgt_mask, None,
                                src_padding_mask, tgt_padding_mask, memory_key_padding_mask)
        return self.generator(outs)

    def encode(self, src: Tensor, src_mask: Tensor):
        return self.transformer.encoder(self.positional_encoding(
                            self.src_tok_emb(src)), src_mask)

    def decode(self, tgt: Tensor, memory: Tensor, tgt_mask: Tensor):
        return self.transformer.decoder(self.positional_encoding(
                          self.tgt_tok_emb(tgt)), memory,
                          tgt_mask) 

During training, we need a subsequent word mask that will prevent the model from looking into future words when making predictions. We will also need masks to hide source and target padding tokens. Below, let's define a function that will take care of both.

def generate_square_subsequent_mask(sz):
    mask = (torch.triu(torch.ones((sz, sz), device=DEVICE)) == 1).transpose(0, 1)
    mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
    return mask

def create_mask(src, tgt):
    src_seq_len = src.shape[0]
    tgt_seq_len = tgt.shape[0]

    tgt_mask = generate_square_subsequent_mask(tgt_seq_len)
    src_mask = torch.zeros((src_seq_len, src_seq_len),device=DEVICE).type(torch.bool)

    src_padding_mask = (src == PAD_IDX).transpose(0, 1)
    tgt_padding_mask = (tgt == PAD_IDX).transpose(0, 1)
    return src_mask, tgt_mask, src_padding_mask, tgt_padding_mask 

Let's now define the parameters of our model and instantiate it. Below, we also define our loss function, which is the cross-entropy loss, and the optimizer used for training.

torch.manual_seed(0)

SRC_VOCAB_SIZE = len(vocab_transform[SRC_LANGUAGE])
TGT_VOCAB_SIZE = len(vocab_transform[TGT_LANGUAGE])
EMB_SIZE = 512
NHEAD = 8
FFN_HID_DIM = 512
BATCH_SIZE = 128
NUM_ENCODER_LAYERS = 3
NUM_DECODER_LAYERS = 3

transformer = Seq2SeqTransformer(NUM_ENCODER_LAYERS, NUM_DECODER_LAYERS, EMB_SIZE,
                                 NHEAD, SRC_VOCAB_SIZE, TGT_VOCAB_SIZE, FFN_HID_DIM)

for p in transformer.parameters():
    if p.dim() > 1:
        nn.init.xavier_uniform_(p)

transformer = transformer.to(DEVICE)

loss_fn = torch.nn.CrossEntropyLoss(ignore_index=PAD_IDX)

optimizer = torch.optim.Adam(transformer.parameters(), lr=0.0001, betas=(0.9, 0.98), eps=1e-9) 

Collation

As seen in the Data Sourcing and Processing section, our data iterator yields a pair of raw strings. We need to convert these string pairs into batched tensors that can be processed by our Seq2Seq network defined previously. Below we define our collate function that converts a batch of raw strings into batched tensors that can be fed directly into our model.

from torch.nn.utils.rnn import pad_sequence

# helper function to club together sequential operations
def sequential_transforms(*transforms):
    def func(txt_input):
        for transform in transforms:
            txt_input = transform(txt_input)
        return txt_input
    return func

# function to add BOS/EOS and create tensor for input sequence indices
def tensor_transform(token_ids: List[int]):
    return torch.cat((torch.tensor([BOS_IDX]),
                      torch.tensor(token_ids),
                      torch.tensor([EOS_IDX])))

# ``src`` and ``tgt`` language text transforms to convert raw strings into tensors indices
text_transform = {}
for ln in [SRC_LANGUAGE, TGT_LANGUAGE]:
    text_transform[ln] = sequential_transforms(token_transform[ln], #Tokenization
                                               vocab_transform[ln], #Numericalization
                                               tensor_transform) # Add BOS/EOS and create tensor

# function to collate data samples into batch tensors
def collate_fn(batch):
    src_batch, tgt_batch = [], []
    for src_sample, tgt_sample in batch:
        src_batch.append(text_transform[SRC_LANGUAGE](src_sample.rstrip("\n")))
        tgt_batch.append(text_transform[TGT_LANGUAGE](tgt_sample.rstrip("\n")))

    src_batch = pad_sequence(src_batch, padding_value=PAD_IDX)
    tgt_batch = pad_sequence(tgt_batch, padding_value=PAD_IDX)
    return src_batch, tgt_batch 

Let's define the training and evaluation loops that will be called for each epoch.

from torch.utils.data import DataLoader

def train_epoch(model, optimizer):
    model.train()
    losses = 0
    train_iter = Multi30k(split='train', language_pair=(SRC_LANGUAGE, TGT_LANGUAGE))
    train_dataloader = DataLoader(train_iter, batch_size=BATCH_SIZE, collate_fn=collate_fn)

    for src, tgt in train_dataloader:
        src = src.to(DEVICE)
        tgt = tgt.to(DEVICE)

        tgt_input = tgt[:-1, :]

        src_mask, tgt_mask, src_padding_mask, tgt_padding_mask = create_mask(src, tgt_input)

        logits = model(src, tgt_input, src_mask, tgt_mask,src_padding_mask, tgt_padding_mask, src_padding_mask)

        optimizer.zero_grad()

        tgt_out = tgt[1:, :]
        loss = loss_fn(logits.reshape(-1, logits.shape[-1]), tgt_out.reshape(-1))
        loss.backward()

        optimizer.step()
        losses += loss.item()

    return losses / len(list(train_dataloader))

def evaluate(model):
    model.eval()
    losses = 0

    val_iter = Multi30k(split='valid', language_pair=(SRC_LANGUAGE, TGT_LANGUAGE))
    val_dataloader = DataLoader(val_iter, batch_size=BATCH_SIZE, collate_fn=collate_fn)

    for src, tgt in val_dataloader:
        src = src.to(DEVICE)
        tgt = tgt.to(DEVICE)

        tgt_input = tgt[:-1, :]

        src_mask, tgt_mask, src_padding_mask, tgt_padding_mask = create_mask(src, tgt_input)

        logits = model(src, tgt_input, src_mask, tgt_mask,src_padding_mask, tgt_padding_mask, src_padding_mask)

        tgt_out = tgt[1:, :]
        loss = loss_fn(logits.reshape(-1, logits.shape[-1]), tgt_out.reshape(-1))
        losses += loss.item()

    return losses / len(list(val_dataloader)) 

Now we have all the ingredients to train our model. Let's do it!

from timeit import default_timer as timer
NUM_EPOCHS = 18

for epoch in range(1, NUM_EPOCHS+1):
    start_time = timer()
    train_loss = train_epoch(transformer, optimizer)
    end_time = timer()
    val_loss = evaluate(transformer)
    print((f"Epoch: {epoch}, Train loss: {train_loss:.3f}, Val loss: {val_loss:.3f}, "f"Epoch time = {(end_time  -  start_time):.3f}s"))

# function to generate output sequence using greedy algorithm
def greedy_decode(model, src, src_mask, max_len, start_symbol):
    src = src.to(DEVICE)
    src_mask = src_mask.to(DEVICE)

    memory = model.encode(src, src_mask)
    ys = torch.ones(1, 1).fill_(start_symbol).type(torch.long).to(DEVICE)
    for i in range(max_len-1):
        memory = memory.to(DEVICE)
        tgt_mask = (generate_square_subsequent_mask(ys.size(0))
                    .type(torch.bool)).to(DEVICE)
        out = model.decode(ys, memory, tgt_mask)
        out = out.transpose(0, 1)
        prob = model.generator(out[:, -1])
        _, next_word = torch.max(prob, dim=1)
        next_word = next_word.item()

        ys = torch.cat([ys,
                        torch.ones(1, 1).type_as(src.data).fill_(next_word)], dim=0)
        if next_word == EOS_IDX:
            break
    return ys

# actual function to translate input sentence into target language
def translate(model: torch.nn.Module, src_sentence: str):
    model.eval()
    src = text_transform[SRC_LANGUAGE](src_sentence).view(-1, 1)
    num_tokens = src.shape[0]
    src_mask = (torch.zeros(num_tokens, num_tokens)).type(torch.bool)
    tgt_tokens = greedy_decode(
        model,  src, src_mask, max_len=num_tokens + 5, start_symbol=BOS_IDX).flatten()
    return " ".join(vocab_transform[TGT_LANGUAGE].lookup_tokens(list(tgt_tokens.cpu().numpy()))).replace("<bos>", "").replace("<eos>", "") 
print(translate(transformer, "Eine Gruppe von Menschen steht vor einem Iglu .")) 

References

  1. Attention Is All You Need paper. papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf

  2. The annotated transformer. nlp.seas.harvard.edu/2018/04/03/attention.html#positional-encoding

Total running time of the script: (0 minutes 0.000 seconds)

Download Python source code: translation_transformer.py

Download Jupyter notebook: translation_transformer.ipynb

Gallery generated by Sphinx-Gallery

Preprocess custom text dataset using Torchtext

Original: pytorch.org/tutorials/beginner/torchtext_custom_dataset_tutorial.html

Translator: 飞龙

License: CC BY-NC-SA 4.0

Note

Click here to download the full example code

Author: Anupam Sharma

This tutorial illustrates the usage of torchtext on a dataset that is not built-in. In this tutorial, we will preprocess a dataset that can be further utilized to train a sequence-to-sequence model for machine translation (something like, in this tutorial: Sequence to Sequence Learning with Neural Networks), but without using the legacy version of torchtext.

In this tutorial, we will learn how to:

  • Read a dataset

  • Tokenize sentences

  • Apply transforms to sentences

  • Perform bucket batching

Let us assume that we need to prepare a dataset to train a model that can perform English to German translation. We will use tab-delimited German-English sentence pairs provided by the Tatoeba Project, which can be downloaded from this link.

Sentence pairs for other languages can be found at this link.

Setup

First, download the dataset, extract the zip, and note the path to the file deu.txt.

Ensure that the following packages are installed:

  • Torchdata 0.6.0 (installation instructions)

  • Torchtext 0.15.0 (installation instructions)

  • Spacy

Here, we are using Spacy to tokenize the text. In simple words, tokenization means converting a sentence into a list of words. Spacy is a Python package used for various Natural Language Processing (NLP) tasks.

Download the English and German models from Spacy as shown below:

python  -m  spacy  download  en_core_web_sm
python  -m  spacy  download  de_core_news_sm 

Let us start by importing the required modules:

import torchdata.datapipes as dp
import torchtext.transforms as T
import spacy
from torchtext.vocab import build_vocab_from_iterator
eng = spacy.load("en_core_web_sm") # Load the English model to tokenize English text
de = spacy.load("de_core_news_sm") # Load the German model to tokenize German text 

Now we will load the dataset

FILE_PATH = 'data/deu.txt'
data_pipe = dp.iter.IterableWrapper([FILE_PATH])
data_pipe = dp.iter.FileOpener(data_pipe, mode='rb')
data_pipe = data_pipe.parse_csv(skip_lines=0, delimiter='\t', as_tuple=True) 

In the above code block, we are doing the following things:

  1. At line 2, we are creating an iterable of filenames

  2. At line 3, we pass the iterable to FileOpener, which then opens the file in read mode

  3. At line 4, we call a function to parse the file, which again returns an iterable of tuples representing each row of the tab-delimited file

DataPipes can be thought of as something like dataset objects, on which we can perform various operations. Check this tutorial for more details on DataPipes.
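As a tiny illustration (not part of the original tutorial) of chaining operations on a DataPipe, independent of the translation data:

import torchdata.datapipes as dp

numbers = dp.iter.IterableWrapper([1, 2, 3, 4, 5])
doubled = numbers.map(lambda x: x * 2)  # apply a function to every element
print(list(doubled))  # [2, 4, 6, 8, 10]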

We can verify that the iterable contains the pairs of sentences as shown below:

for sample in data_pipe:
    print(sample)
    break 
('Go.', 'Geh.', 'CC-BY 2.0 (France) Attribution: tatoeba.org #2877272 (CM) & #8597805 (Roujin)') 

Note that we also have attribution details along with the pair of sentences. We will write a small function to remove the attribution details:

def removeAttribution(row):
    """
    Function to keep the first two elements in a tuple
    """
    return row[:2]
data_pipe = data_pipe.map(removeAttribution) 

The map function at line 6 in the above code block can be used to apply some function on each element of data_pipe. Now, we can verify that the data_pipe only contains the pairs of sentences.

for sample in data_pipe:
    print(sample)
    break 
('Go.', 'Geh.') 

Now, let us define a few functions to perform tokenization:

def engTokenize(text):
    """
    Tokenize an English text and return a list of tokens
    """
    return [token.text for token in eng.tokenizer(text)]

def deTokenize(text):
    """
    Tokenize a German text and return a list of tokens
    """
    return [token.text for token in de.tokenizer(text)] 

The above functions accept a text and return a list of words as shown below:

print(engTokenize("Have a good day!!!"))
print(deTokenize("Haben Sie einen guten Tag!!!")) 
['Have', 'a', 'good', 'day', '!', '!', '!']
['Haben', 'Sie', 'einen', 'guten', 'Tag', '!', '!', '!'] 

Building the vocabulary

Let us consider the English sentences as the source and the German sentences as the target.

Vocabulary can be considered as the set of unique words we have in the dataset. We will now build the vocabulary for both the source and the target.

Let us define a function to get tokens from elements of tuples in the iterator.

def getTokens(data_iter, place):
    """
    Function to yield tokens from an iterator. Since, our iterator contains
    tuple of sentences (source and target), `place` parameters defines for which
    index to return the tokens for. `place=0` for source and `place=1` for target
    """
    for english, german in data_iter:
        if place == 0:
            yield engTokenize(english)
        else:
            yield deTokenize(german) 

Now, we will build the vocabulary for the source:

source_vocab = build_vocab_from_iterator(
    getTokens(data_pipe,0),
    min_freq=2,
    specials= ['<pad>', '<sos>', '<eos>', '<unk>'],
    special_first=True
)
source_vocab.set_default_index(source_vocab['<unk>']) 

The code above builds the vocabulary from the iterator. In the above code block:

  • At line 2, we call the getTokens() function with place=0 as we need the vocabulary for source sentences.

  • At line 3, we set min_freq=2. This means the function will skip those words that occur less than 2 times.

  • At line 4, we specify some special tokens:

    • <sos> for start of sentence

    • <eos> for end of sentence

    • <unk> for unknown words. An example of an unknown word is one skipped because of min_freq=2.

    • <pad> is the padding token. While training, we mostly train a model in batches. In a batch, there can be sentences of different lengths. So, we pad the shorter sentences with the <pad> token to make the length of all sequences in a batch equal.

  • At line 5, we set special_first=True. Which means <pad> will get index 0, <sos> will get index 1, <eos> will get index 2 and <unk> will get index 3 in the vocabulary.

  • At line 7, we set the default index as the index of <unk>. That means if some word is not in the vocabulary, we will use <unk> instead of that unknown word.

Similarly, we will build the vocabulary for the target sentences:

target_vocab = build_vocab_from_iterator(
    getTokens(data_pipe,1),
    min_freq=2,
    specials= ['<pad>', '<sos>', '<eos>', '<unk>'],
    special_first=True
)
target_vocab.set_default_index(target_vocab['<unk>']) 

Note that the example above shows how we can add special tokens to our vocabulary. The special tokens may change based on the requirements.

Now, we can verify that the special tokens are placed at the beginning, followed by the other words. In the code below, source_vocab.get_itos() returns a list with tokens at indices based on the vocabulary.

print(source_vocab.get_itos()[:9]) 
['<pad>', '<sos>', '<eos>', '<unk>', '.', 'I', 'Tom', 'to', 'you'] 

Numericalize sentences using the vocabulary

After building the vocabulary, we need to convert our sentences to corresponding indices. Let us define some functions for this:

def getTransform(vocab):
    """
    Create transforms based on given vocabulary. The returned transform is applied to sequence
    of tokens.
    """
    text_tranform = T.Sequential(
        ## converts the sentences to indices based on given vocabulary
        T.VocabTransform(vocab=vocab),
        ## Add <sos> at beginning of each sentence. 1 because the index for <sos> in vocabulary is
        # 1 as seen in previous section
        T.AddToken(1, begin=True),
        ## Add <eos> at end of each sentence. 2 because the index for <eos> in vocabulary is
        # 2 as seen in previous section
        T.AddToken(2, begin=False)
    )
    return text_tranform 

Now, let us see how to use the above function. The function returns a Transforms object which we will use on our sentence. Let us take a random sentence and check how the transform works.

temp_list = list(data_pipe)
some_sentence = temp_list[798][0]
print("Some sentence=", end="")
print(some_sentence)
transformed_sentence = getTransform(source_vocab)(engTokenize(some_sentence))
print("Transformed sentence=", end="")
print(transformed_sentence)
index_to_string = source_vocab.get_itos()
for index in transformed_sentence:
    print(index_to_string[index], end=" ") 
Some sentence=I fainted.
Transformed sentence=[1, 5, 2897, 4, 2]
<sos> I fainted . <eos> 

In the above code:

  • At line 2, we take a source sentence from the list that we created from data_pipe at line 1

  • At line 5, we get a transform based on the source vocabulary and apply it to a tokenized sentence. Note that transforms take a list of words and not a sentence.

  • At line 8, we get the mapping of index to string and then use it to get the transformed sentence

Now we will use DataPipe functions to apply the transform to all our sentences. Let us define some more functions for this.

def applyTransform(sequence_pair):
    """
    Apply transforms to sequence of tokens in a sequence pair
    """

    return (
        getTransform(source_vocab)(engTokenize(sequence_pair[0])),
        getTransform(target_vocab)(deTokenize(sequence_pair[1]))
    )
data_pipe = data_pipe.map(applyTransform) ## Apply the function to each element in the iterator
temp_list = list(data_pipe)
print(temp_list[0]) 
([1, 616, 4, 2], [1, 739, 4, 2]) 

Make batches (with bucket batch)

Generally, we train models in batches. While working with sequence-to-sequence models, it is recommended to keep the lengths of the sequences in a batch similar. For that we will use the bucketbatch function of data_pipe.

Let us define some functions that will be used by the bucketbatch function.

def sortBucket(bucket):
    """
    Function to sort a given bucket. Here, we want to sort based on the length of
    source and target sequence.
    """
    return sorted(bucket, key=lambda x: (len(x[0]), len(x[1]))) 

Now, we will apply the bucketbatch function:

data_pipe = data_pipe.bucketbatch(
    batch_size = 4, batch_num=5,  bucket_num=1,
    use_in_batch_shuffle=False, sort_key=sortBucket
) 

In the above code block:

  • We keep the batch size = 4.
  • batch_num is the number of batches to keep in a bucket
  • bucket_num is the number of buckets to keep in a pool for shuffling
  • sort_key specifies the function that takes a bucket and sorts it

Now, let us consider a batch of source sentences as X and a batch of target sentences as y. Generally, while training a model, we predict on a batch of X and compare the result with y. But, a batch in our data_pipe is of the form [(X_1,y_1), (X_2,y_2), (X_3,y_3), (X_4,y_4)]:

print(list(data_pipe)[0]) 
[([1, 11105, 17, 4, 2], [1, 507, 29, 24, 2]), ([1, 11105, 17, 4, 2], [1, 7994, 1487, 24, 2]), ([1, 5335, 21, 4, 2], [1, 6956, 32, 24, 2]), ([1, 5335, 21, 4, 2], [1, 16003, 32, 24, 2])] 

So, we will now convert them into the form: ((X_1,X_2,X_3,X_4), (y_1,y_2,y_3,y_4)). For this we will write a small function:

def separateSourceTarget(sequence_pairs):
    """
    input of form: `[(X_1,y_1), (X_2,y_2), (X_3,y_3), (X_4,y_4)]`
    output of form: `((X_1,X_2,X_3,X_4), (y_1,y_2,y_3,y_4))`
    """
    sources,targets = zip(*sequence_pairs)
    return sources,targets

## Apply the function to each element in the iterator
data_pipe = data_pipe.map(separateSourceTarget)
print(list(data_pipe)[0]) 
(([1, 6860, 23, 10, 2], [1, 6860, 23, 10, 2], [1, 29, 466, 4, 2], [1, 29, 466, 4, 2]), ([1, 20825, 8, 2], [1, 11118, 8, 2], [1, 31, 1152, 4, 2], [1, 31, 1035, 4, 2])) 

Now, we have the data as desired.

Padding

As discussed earlier while building the vocabulary, we need to pad the shorter sentences in a batch to make all the sequences in a batch of equal length. We can perform padding as follows:

def applyPadding(pair_of_sequences):
    """
    Convert sequences to tensors and apply padding
    """
    return (T.ToTensor(0)(list(pair_of_sequences[0])), T.ToTensor(0)(list(pair_of_sequences[1])))
## `T.ToTensor(0)` returns a transform that converts the sequence to `torch.tensor` and also applies
# padding. Here, `0` is passed to the constructor to specify the index of the `<pad>` token in the
# vocabulary.
data_pipe = data_pipe.map(applyPadding) 

Now, we can use the index to string mapping to see how the sequences look with tokens instead of indices:

source_index_to_string = source_vocab.get_itos()
target_index_to_string = target_vocab.get_itos()

def showSomeTransformedSentences(data_pipe):
    """
    Function to show how the sentences look like after applying all transforms.
    Here we try to print actual words instead of corresponding index
    """
    for sources,targets in data_pipe:
        if sources[0][-1] != 0:
            continue # Just to visualize padding of shorter sentences
        for i in range(4):
            source = ""
            for token in sources[i]:
                source += " " + source_index_to_string[token]
            target = ""
            for token in targets[i]:
                target += " " + target_index_to_string[token]
            print(f"Source: {source}")
            print(f"Target: {target}")
        break

showSomeTransformedSentences(data_pipe) 
Source:  <sos> Freeze ! <eos> <pad>
Target:  <sos> Stehenbleiben ! <eos> <pad>
Source:  <sos> <unk> ! <eos> <pad>
Target:  <sos> Zum Wohl ! <eos>
Source:  <sos> Freeze ! <eos> <pad>
Target:  <sos> Keine Bewegung ! <eos>
Source:  <sos> Got it ! <eos>
Target:  <sos> Verstanden ! <eos> <pad> 

In the above output we can observe that the shorter sentences are padded with <pad>. Now, we can use data_pipe while writing our training function.

Some parts of this tutorial were inspired by this article.

Total running time of the script: (4 minutes 41.756 seconds)

Download Python source code: torchtext_custom_dataset_tutorial.py

Download Jupyter notebook: torchtext_custom_dataset_tutorial.ipynb

Gallery generated by Sphinx-Gallery

Backends

Introduction to ONNX

Original: pytorch.org/tutorials/beginner/onnx/intro_onnx.html

Translator: 飞龙

License: CC BY-NC-SA 4.0

Note

Click here to download the full example code

Introduction to ONNX || Exporting a PyTorch model to ONNX || Extending the ONNX Registry

Author: Thiago Crepaldi

Open Neural Network eXchange (ONNX) is an open standard format for representing machine learning models. The torch.onnx module provides APIs to capture the computation graph from a native PyTorch torch.nn.Module model and convert it into an ONNX graph.

The exported model can be consumed by any of the many runtimes that support ONNX, including Microsoft's ONNX Runtime.

Note

Currently, there are two flavors of ONNX exporter APIs, but this tutorial will focus on torch.onnx.dynamo_export.

The TorchDynamo engine leverages Python's frame evaluation API and dynamically rewrites its bytecode into an FX graph. The resulting FX graph is polished before it is finally translated into an ONNX graph.

The main advantage of this approach is that the FX graph is captured using bytecode analysis that preserves the dynamic nature of the model instead of using traditional static tracing techniques.
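As a minimal sketch (not part of the original tutorial) of what the export call looks like; the toy model and the output filename are made up for illustration:

import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x + 1.0)

model = TinyModel()
example_input = torch.randn(2, 3)
# Capture the model with TorchDynamo and convert it into an ONNX graph
onnx_program = torch.onnx.dynamo_export(model, example_input)
onnx_program.save("tiny_model.onnx")  # serialize the ONNX model to disk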

Dependencies

PyTorch 2.1.0 or newer is required.

The ONNX exporter depends on extra Python packages:

  • The ONNX standard library
  • The ONNX Script library, which enables developers to author ONNX operators, functions and models using a subset of Python in an expressive, and yet simple fashion.

They can be installed through pip:

pip  install  --upgrade  onnx  onnxscript 

To validate the installation, run the following commands:

import torch
print(torch.__version__)

import onnxscript
print(onnxscript.__version__)

from onnxscript import opset18  # opset 18 is the latest (and only) supported version for now

import onnxruntime
print(onnxruntime.__version__) 

Each import must succeed without any errors, and the library versions must be printed out.

Further reading

The list below refers to tutorials that range from basic examples to advanced scenarios, not necessarily in the order they are listed. Feel free to jump directly to specific topics of your interest or sit tight and have fun going through all of them to learn all there is about the ONNX exporter.

  1. Exporting a PyTorch model to ONNX
  2. Extending the ONNX Registry

Total running time of the script: (0 minutes 0.000 seconds)

Download Python source code: intro_onnx.py

Download Jupyter notebook: intro_onnx.ipynb

Gallery generated by Sphinx-Gallery

Reinforcement Learning

Reinforcement Learning (DQN) Tutorial

Original: pytorch.org/tutorials/intermediate/reinforcement_q_learning.html

Translator: 飞龙

License: CC BY-NC-SA 4.0

Note

Click here to download the full example code

Author: Adam Paszke

Mark Towers

This tutorial shows how to use PyTorch to train a Deep Q Learning (DQN) agent on the CartPole-v1 task from Gymnasium.

Task

The agent has to decide between two actions - moving the cart left or right - so that the pole attached to it stays upright. You can find more information about the environment and other more challenging environments at Gymnasium's website.

(figure: the CartPole environment)

As the agent observes the current state of the environment and chooses an action, the environment transitions to a new state, and also returns a reward that indicates the consequences of the action. In this task, rewards are +1 for every incremental timestep, and the environment terminates if the pole falls over too far or the cart moves more than 2.4 units away from center. This means better performing scenarios will run for a longer duration, accumulating a larger return.

The CartPole task is designed so that the inputs to the agent are 4 real values representing the environment state (position, velocity, etc.). We take these 4 inputs without any scaling and pass them through a small fully-connected network with 2 outputs, one for each action. The network is trained to predict the expected value for each action, given the input state. The action with the highest expected value is then chosen.
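As a small sketch (not from the original tutorial) of that last step, choosing the action with the highest predicted value from a toy 2-output network; the layer sizes are illustrative:

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 2))  # 4 state values in, 2 action values out
state = torch.randn(1, 4)           # a single observation
q_values = net(state)               # predicted expected value of each action
action = q_values.argmax(dim=1)     # pick the action with the highest expected value
print(q_values, action.item())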

First, let's import the needed packages. Firstly, we need gymnasium for the environment, installed using pip. This is a fork of the original OpenAI Gym project and maintained by the same team since Gym v0.19. If you are running this in Google Colab, run:

%%bash
pip3  install  gymnasium[classic_control] 

We'll also use the following from PyTorch:

  • neural networks (torch.nn)

  • optimization (torch.optim)

  • automatic differentiation (torch.autograd)

import gymnasium as gym
import math
import random
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple, deque
from itertools import count

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

env = gym.make("CartPole-v1")

# set up matplotlib
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
    from IPython import display

plt.ion()

# if GPU is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu") 

Replay Memory

We'll be using experience replay memory for training our DQN. It stores the transitions that the agent observes, allowing us to reuse this data later. By sampling from it randomly, the transitions that build up a batch are decorrelated. It has been shown that this greatly stabilizes and improves the DQN training procedure.

For this, we're going to need two classes:

  • Transition - a named tuple representing a single transition in our environment. It essentially maps (state, action) pairs to their (next_state, reward) result, with the state being the screen difference image as described later on.

  • ReplayMemory - a cyclic buffer of bounded size that holds the transitions observed recently. It also implements a .sample() method for selecting a random batch of transitions for training.

Transition = namedtuple('Transition',
                        ('state', 'action', 'next_state', 'reward'))

class ReplayMemory(object):

    def __init__(self, capacity):
        self.memory = deque([], maxlen=capacity)

    def push(self, *args):
        """Save a transition"""
        self.memory.append(Transition(*args))

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory) 

Now, let's define our model. But first, let's quickly recap what a DQN is.

DQN algorithm

Our environment is deterministic, so all equations presented here are also formulated deterministically for the sake of simplicity. In the reinforcement learning literature, they would also contain expectations over stochastic transitions in the environment.

Our aim will be to train a policy that tries to maximize the discounted, cumulative reward $R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t - t_0} r_t$, where $R_{t_0}$ is also known as the return. The discount, $\gamma$, should be a constant between $0$ and $1$ that ensures the sum converges. A lower $\gamma$ makes rewards from the uncertain far future less important for our agent than the ones in the near future that it can be fairly confident about. It also encourages the agent to collect rewards closer in time than equivalent rewards that are temporally far away in the future.
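As a tiny numerical illustration (not part of the original tutorial), the return of a reward sequence can be computed directly from the definition above:

# Discounted return for rewards r_t = [1, 1, 1, 1] starting at t_0 = 0
gamma = 0.99
rewards = [1.0, 1.0, 1.0, 1.0]
R = sum(gamma ** t * r for t, r in enumerate(rewards))
print(R)  # 1 + 0.99 + 0.99**2 + 0.99**3 ≈ 3.9404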

Training loop

This cell instantiates our model and its optimizer, and defines some utilities:

For our training update rule, we'll use the fact that every $Q$ function for some policy obeys the Bellman equation:

plot_durations - a helper for plotting the duration of episodes, along with an average over the last 100 episodes (the measure used in the official evaluations). The plot will be underneath the cell containing the main training loop, and will update after every episode.

Here, you can find an optimize_model function that performs a single step of the optimization. It first samples a batch, concatenates all the tensors into a single one, computes $Q(s_t, a_t)$ and $V(s_{t+1}) = \max_a Q(s_{t+1}, a)$, and combines them into our loss. By definition we set $V(s) = 0$ if $s$ is a terminal state. We also use a target network to compute $V(s_{t+1})$ for added stability. The target network is updated at every step with a soft update controlled by the hyperparameter TAU, which was previously defined.

Our model will be a feed forward neural network that takes in the difference between the current and previous screen patches. It has two outputs, representing $Q(s, \mathrm{left})$ and $Q(s, \mathrm{right})$ (where $s$ is the input to the network). In effect, the network is trying to predict the expected return of taking each action given the current input.

Finally, the code for training our model.

To minimize this error, we will use the Huber loss. The Huber loss acts like the mean squared error when the error is small, but like the mean absolute error when the error is large - this makes it more robust to outliers when the estimates of $Q$ are very noisy. We calculate it over a batch of transitions, $B$, sampled from the replay memory:

$$\mathcal{L} = \frac{1}{|B|}\sum_{(s, a, s', r) \in B} \mathcal{L}(\delta)$$

$$\text{where} \quad \mathcal{L}(\delta) = \begin{cases} \frac{1}{2}{\delta^2} & \text{for } |\delta| \le 1, \\ |\delta| - \frac{1}{2} & \text{otherwise.} \end{cases}$$
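As a quick check (not from the original tutorial) that nn.SmoothL1Loss implements this piecewise definition:

import torch
import torch.nn as nn

delta = torch.tensor([0.5, 2.0])            # one small error, one large error
huber = nn.SmoothL1Loss(reduction="none")(delta, torch.zeros_like(delta))
print(huber)  # tensor([0.1250, 1.5000]) -> 0.5**2 / 2 and |2.0| - 0.5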

select_action - will select an action according to an epsilon greedy policy. Simply put, we'll sometimes use our model for choosing the action, and sometimes we'll just sample one uniformly. The probability of choosing a random action will start at EPS_START and will decay exponentially towards EPS_END. EPS_DECAY controls the rate of the decay.

class DQN(nn.Module):

    def __init__(self, n_observations, n_actions):
        super(DQN, self).__init__()
        self.layer1 = nn.Linear(n_observations, 128)
        self.layer2 = nn.Linear(128, 128)
        self.layer3 = nn.Linear(128, n_actions)

    # Called with either one element to determine next action, or a batch
    # during optimization. Returns tensor([[left0exp,right0exp]...]).
    def forward(self, x):
        x = F.relu(self.layer1(x))
        x = F.relu(self.layer2(x))
        return self.layer3(x) 

$$\pi^*(s) = \arg\max_a Q^*(s, a)$$

However, we don't know everything about the world, so we don't have access to $Q^*$. But, since neural networks are universal function approximators, we can simply create one and train it to resemble $Q^*$.

Hyperparameters and utilities

$$\delta = Q(s, a) - (r + \gamma \max_a Q(s', a))$$

  • Training

  • The main idea behind Q-learning is that if we had a function $Q^*: \mathit{State} \times \mathit{Action} \rightarrow \mathbb{R}$ that could tell us what our return would be if we were to take an action in a given state, then we could easily construct a policy that maximizes our rewards:

# BATCH_SIZE is the number of transitions sampled from the replay buffer
# GAMMA is the discount factor as mentioned in the previous section
# EPS_START is the starting value of epsilon
# EPS_END is the final value of epsilon
# EPS_DECAY controls the rate of exponential decay of epsilon, higher means a slower decay
# TAU is the update rate of the target network
# LR is the learning rate of the ``AdamW`` optimizer
BATCH_SIZE = 128
GAMMA = 0.99
EPS_START = 0.9
EPS_END = 0.05
EPS_DECAY = 1000
TAU = 0.005
LR = 1e-4

# Get number of actions from gym action space
n_actions = env.action_space.n
# Get the number of state observations
state, info = env.reset()
n_observations = len(state)

policy_net = DQN(n_observations, n_actions).to(device)
target_net = DQN(n_observations, n_actions).to(device)
target_net.load_state_dict(policy_net.state_dict())

optimizer = optim.AdamW(policy_net.parameters(), lr=LR, amsgrad=True)
memory = ReplayMemory(10000)

steps_done = 0

def select_action(state):
    global steps_done
    sample = random.random()
    eps_threshold = EPS_END + (EPS_START - EPS_END) * \
        math.exp(-1. * steps_done / EPS_DECAY)
    steps_done += 1
    if sample > eps_threshold:
        with torch.no_grad():
            # t.max(1) will return the largest column value of each row.
            # second column on max result is index of where max element was
            # found, so we pick action with the larger expected reward.
            return policy_net(state).max(1).indices.view(1, 1)
    else:
        return torch.tensor([[env.action_space.sample()]], device=device, dtype=torch.long)

episode_durations = []

def plot_durations(show_result=False):
    plt.figure(1)
    durations_t = torch.tensor(episode_durations, dtype=torch.float)
    if show_result:
        plt.title('Result')
    else:
        plt.clf()
        plt.title('Training...')
    plt.xlabel('Episode')
    plt.ylabel('Duration')
    plt.plot(durations_t.numpy())
    # Take 100 episode averages and plot them too
    if len(durations_t) >= 100:
        means = durations_t.unfold(0, 100, 1).mean(1).view(-1)
        means = torch.cat((torch.zeros(99), means))
        plt.plot(means.numpy())

    plt.pause(0.001)  # pause a bit so that plots are updated
    if is_ipython:
        if not show_result:
            display.display(plt.gcf())
            display.clear_output(wait=True)
        else:
            display.display(plt.gcf()) 

The difference between the two sides of the equality is known as the temporal difference error, $\delta$:

$$Q^{\pi}(s, a) = r + \gamma Q^{\pi}(s', \pi(s'))$$

Q-network

def optimize_model():
    if len(memory) < BATCH_SIZE:
        return
    transitions = memory.sample(BATCH_SIZE)
    # Transpose the batch (see https://stackoverflow.com/a/19343/3343043 for
    # detailed explanation). This converts batch-array of Transitions
    # to Transition of batch-arrays.
    batch = Transition(*zip(*transitions))

    # Compute a mask of non-final states and concatenate the batch elements
    # (a final state would've been the one after which simulation ended)
    non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,
                                          batch.next_state)), device=device, dtype=torch.bool)
    non_final_next_states = torch.cat([s for s in batch.next_state
                                                if s is not None])
    state_batch = torch.cat(batch.state)
    action_batch = torch.cat(batch.action)
    reward_batch = torch.cat(batch.reward)

    # Compute Q(s_t, a) - the model computes Q(s_t), then we select the
    # columns of actions taken. These are the actions which would've been taken
    # for each batch state according to policy_net
    state_action_values = policy_net(state_batch).gather(1, action_batch)

    # Compute V(s_{t+1}) for all next states.
    # Expected values of actions for non_final_next_states are computed based
    # on the "older" target_net; selecting their best reward with max(1).values
    # This is merged based on the mask, such that we'll have either the expected
    # state value or 0 in case the state was final.
    next_state_values = torch.zeros(BATCH_SIZE, device=device)
    with torch.no_grad():
        next_state_values[non_final_mask] = target_net(non_final_next_states).max(1).values
    # Compute the expected Q values
    expected_state_action_values = (next_state_values * GAMMA) + reward_batch

    # Compute Huber loss
    criterion = nn.SmoothL1Loss()
    loss = criterion(state_action_values, expected_state_action_values.unsqueeze(1))

    # Optimize the model
    optimizer.zero_grad()
    loss.backward()
    # In-place gradient clipping
    torch.nn.utils.clip_grad_value_(policy_net.parameters(), 100)
    optimizer.step() 

Below, you can find the main training loop. At the beginning we reset the environment and obtain the initial state Tensor. Then, we sample an action, execute it, observe the next state and the reward (always 1), and optimize our model once. When the episode ends (our model fails), we restart the loop.

num_episodes is set to 600 if a GPU is available, otherwise 50 episodes are scheduled so training does not take too long. However, 50 episodes is not enough to observe good performance on CartPole. You should see the model constantly achieve 500 steps within 600 training episodes. Training RL agents can be a noisy process, so restarting training can produce better results if convergence is not observed.

if torch.cuda.is_available():
    num_episodes = 600
else:
    num_episodes = 50

for i_episode in range(num_episodes):
    # Initialize the environment and get its state
    state, info = env.reset()
    state = torch.tensor(state, dtype=torch.float32, device=device).unsqueeze(0)
    for t in count():
        action = select_action(state)
        observation, reward, terminated, truncated, _ = env.step(action.item())
        reward = torch.tensor([reward], device=device)
        done = terminated or truncated

        if terminated:
            next_state = None
        else:
            next_state = torch.tensor(observation, dtype=torch.float32, device=device).unsqueeze(0)

        # Store the transition in memory
        memory.push(state, action, next_state, reward)

        # Move to the next state
        state = next_state

        # Perform one step of the optimization (on the policy network)
        optimize_model()

        # Soft update of the target network's weights
        # θ′ ← τ θ + (1 −τ )θ′
        target_net_state_dict = target_net.state_dict()
        policy_net_state_dict = policy_net.state_dict()
        for key in policy_net_state_dict:
            target_net_state_dict[key] = policy_net_state_dict[key]*TAU + target_net_state_dict[key]*(1-TAU)
        target_net.load_state_dict(target_net_state_dict)

        if done:
            episode_durations.append(t + 1)
            plot_durations()
            break

print('Complete')
plot_durations(show_result=True)
plt.ioff()
plt.show() 

(figure: plot of episode durations produced by plot_durations during training)

/opt/conda/envs/py_3.10/lib/python3.10/site-packages/gymnasium/utils/passive_env_checker.py:249: DeprecationWarning:

`np.bool8` is a deprecated alias for `np.bool_`.  (Deprecated NumPy 1.24)

Complete 

Here is the diagram that illustrates the overall resulting data flow.

(figure: data flow diagram)

Actions are chosen either randomly or based on a policy, getting the next step sample from the gym environment. We record the results in the replay memory and also run the optimization step on every iteration. Optimization picks a random batch from the replay memory to train the new policy. The "older" target_net is also used in optimization to compute the expected Q values. A soft update of its weights is performed at every step.

Total running time of the script: (12 minutes 45.506 seconds)

Download Python source code: reinforcement_q_learning.py

Download Jupyter notebook: reinforcement_q_learning.ipynb

Gallery generated by Sphinx-Gallery

Reinforcement Learning (PPO) with TorchRL Tutorial

Original: pytorch.org/tutorials/intermediate/reinforcement_ppo.html

Translator: 飞龙

License: CC BY-NC-SA 4.0

Note

Click here to download the full example code

Author: Vincent Moens

This tutorial demonstrates how to use PyTorch and torchrl to train a parametric policy network to solve the Inverted Pendulum task from the OpenAI-Gym/Farama-Gymnasium control library.

(figure: the inverted pendulum environment)

Key learnings:

  • How to create an environment in TorchRL, transform its outputs, and collect data from this environment;

  • How to make your classes talk to each other using TensorDict;

  • The basics of building your training loop with TorchRL:

    • How to compute the advantage signal for policy gradient methods;

    • How to create a stochastic policy using a probabilistic neural network;

    • How to create a dynamic replay buffer and sample from it without repetition.

We will cover six crucial components of TorchRL:

  • environments

  • transforms

  • models (policy and value function)

  • loss modules

  • data collectors

  • replay buffers

If you are running this in Google Colab, make sure you install the following dependencies:

!pip3  install  torchrl
!pip3  install  gym[mujoco]
!pip3  install  tqdm 

Proximal Policy Optimization (PPO) is a policy-gradient algorithm where a batch of data is collected and directly consumed to train the policy to maximise the expected return given some proximality constraints. You can think of it as a sophisticated version of REINFORCE, the foundational policy-optimization algorithm. For more information, see the Proximal Policy Optimization Algorithms paper.

PPO is usually regarded as a fast and efficient method for online, on-policy reinforcement learning. TorchRL provides a loss module that does all the work for you, so that you can rely on this implementation and focus on solving your problem rather than re-inventing the wheel every time you want to train a policy.

For completeness, here is a brief overview of what the loss computes, even though this is taken care of by our ClipPPOLoss module. The algorithm works as follows: 1. we will sample a batch of data by playing the policy in the environment for a given number of steps. 2. Then, we will perform a given number of optimization steps with random sub-samples of this batch using a clipped version of the REINFORCE loss. 3. The clipping will put a pessimistic bound on our loss: lower return estimates will be favored compared to higher ones. The precise formula of the loss is:

$$L(s,a,\theta_k,\theta) = \min\left( \frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)} A^{\pi_{\theta_k}}(s,a), \;\; g(\epsilon, A^{\pi_{\theta_k}}(s,a)) \right),$$

There are two components in that loss: in the first part of the minimum operator, we simply compute an importance-weighted version of the REINFORCE loss (that is, a REINFORCE loss that we have corrected for the fact that the current policy configuration lags the one that was used for the data collection). The second part of the minimum operator is a similar loss where we have clipped the ratios when they exceed or fall below a given pair of thresholds.

This loss ensures that whether the advantage is positive or negative, policy updates that would produce significant shifts from the previous configuration are discouraged.
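As a standalone sketch (not the ClipPPOLoss implementation) of the clipped objective above, assuming log-probabilities under the current and collection-time policies and an advantage estimate are given; names and values are illustrative:

import torch

def clipped_ppo_objective(log_prob, log_prob_old, advantage, eps=0.2):
    # ratio pi_theta(a|s) / pi_theta_k(a|s), computed from log-probabilities
    ratio = torch.exp(log_prob - log_prob_old)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # pessimistic bound: take the minimum of the two terms, then average over the batch
    return torch.min(unclipped, clipped).mean()

log_prob = torch.randn(64)
log_prob_old = torch.randn(64)
advantage = torch.randn(64)
print(clipped_ppo_objective(log_prob, log_prob_old, advantage))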

This tutorial is structured as follows:

  1. First, we will define a set of hyperparameters we will be using for training.

  2. Next, we will focus on creating our environment, or simulator, using TorchRL's wrappers and transforms.

  3. Next, we will design the policy network and the value model, which is indispensable to the loss function. These modules will be used to configure our loss module.

  4. Next, we will create the replay buffer and data loader.

  5. Finally, we will run our training loop and analyze the results.

Throughout this tutorial, we'll be using the tensordict library. TensorDict is the lingua franca of TorchRL: it helps us abstract what a module reads and writes and care less about the specific data description and more about the algorithm itself.
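For readers new to it, here is a minimal TensorDict usage sketch (not part of the original tutorial); the key names and shapes are made up for illustration:

import torch
from tensordict import TensorDict

# A dictionary of tensors sharing a common leading batch dimension
td = TensorDict({"observation": torch.randn(3, 11), "reward": torch.zeros(3, 1)}, batch_size=[3])
print(td["observation"].shape)  # torch.Size([3, 11])
print(td[0])                    # indexing slices every entry along the batch dimension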

from collections import defaultdict

import matplotlib.pyplot as plt
import torch
from tensordict.nn import TensorDictModule
from tensordict.nn.distributions import NormalParamExtractor
from torch import nn
from torchrl.collectors import SyncDataCollector
from torchrl.data.replay_buffers import ReplayBuffer
from torchrl.data.replay_buffers.samplers import SamplerWithoutReplacement
from torchrl.data.replay_buffers.storages import LazyTensorStorage
from torchrl.envs import (Compose, DoubleToFloat, ObservationNorm, StepCounter,
                          TransformedEnv)
from torchrl.envs.libs.gym import GymEnv
from torchrl.envs.utils import check_env_specs, set_exploration_mode
from torchrl.modules import ProbabilisticActor, TanhNormal, ValueOperator
from torchrl.objectives import ClipPPOLoss
from torchrl.objectives.value import GAE
from tqdm import tqdm 

Define Hyperparameters

We set the hyperparameters for our algorithm. Depending on the resources available, one may choose to execute the policy on GPU or on another device. The frame_skip will control how many frames a single action is being executed for. The rest of the arguments that count frames must be corrected for this value (since one environment step will actually return frame_skip frames).

device = "cpu" if not torch.cuda.is_available() else "cuda:0"
num_cells = 256  # number of cells in each layer i.e. output dim.
lr = 3e-4
max_grad_norm = 1.0 

Data collection parameters

When collecting data, we will be able to choose how big each batch will be by defining a frames_per_batch parameter. We will also define how many frames (such as the number of interactions with the simulator) we will allow ourselves to use. In general, the goal of an RL algorithm is to learn to solve the task as fast as it can in terms of environment interactions: the lower the total_frames the better. We also define a frame_skip: in some contexts, repeating the same action multiple times over the course of a trajectory may be beneficial as it makes the behavior more consistent and less erratic. However, "skipping" too many frames will hamper training by reducing the reactivity of the actor to observation changes.

When using frame_skip it is good practice to correct the other frame counts by the number of frames we are grouping together. If we configure a total count of X frames for training but use a frame_skip of Y, we will actually be collecting X*Y frames in total, which exceeds our predefined budget.

frame_skip = 1
frames_per_batch = 1000 // frame_skip
# For a complete training, bring the number of frames up to 1M
total_frames = 50_000 // frame_skip 

PPO parameters

At each data collection (or batch collection) we will run the optimization over a certain number of epochs, each time consuming the entire data we just acquired in a nested training loop. Here, the sub_batch_size is different from the frames_per_batch above: recall that we are working with a "batch of data" coming from our collector, whose size is defined by frames_per_batch, and that we will further split into smaller sub-batches during the inner training loop. The size of these sub-batches is controlled by sub_batch_size.

sub_batch_size = 64  # cardinality of the sub-samples gathered from the current data in the inner loop
num_epochs = 10  # optimization steps per batch of data collected
clip_epsilon = (
    0.2  # clip value for PPO loss: see the equation in the intro for more context.
)
gamma = 0.99
lmbda = 0.95
entropy_eps = 1e-4 

Define an environment

In RL, an environment is usually the way we refer to a simulator or a control system. Various libraries provide simulation environments for reinforcement learning, including Gymnasium (previously OpenAI Gym), DeepMind Control Suite, and many others. As a general library, TorchRL's goal is to provide an interchangeable interface to a large panel of RL simulators, allowing you to easily swap one environment for another. For example, creating a wrapped gym environment can be achieved with few characters:

base_env = GymEnv("InvertedDoublePendulum-v4", device=device, frame_skip=frame_skip) 

There are a few things to notice in this code: first, we created the environment by calling the GymEnv wrapper. If extra keyword arguments are passed, they will be transmitted to the gym.make method, hence covering the most common environment construction commands. Alternatively, one could also directly create a gym environment using gym.make(env_name, **kwargs) and wrap it in a GymWrapper class.

Also note the device argument: for gym, this only controls the device where the input actions and observed states will be stored, but the execution will always be done on CPU. The reason for this is simply that gym does not support on-device execution, unless specified otherwise. For other libraries, we have control over the execution device and, as much as we can, we try to stay consistent in terms of storing and execution backends.
Transforms

We will append some transforms to our environment to prepare the data for the policy. In Gym, this is usually achieved via wrappers. TorchRL takes a different approach, more similar to other pytorch domain libraries, through the use of transforms. To add transforms to an environment, one should simply wrap it in a TransformedEnv instance and append the sequence of transforms to it. The transformed environment will inherit the device and meta-data of the wrapped environment, and transform these depending on the sequence of transforms it contains.

Normalization

The first transform to encode is a normalization transform. As a rule of thumb, it is preferable to have data that loosely matches a unit Gaussian distribution: to obtain this, we will run a certain number of random steps in the environment and compute the summary statistics of these observations.

We'll append two other transforms: the DoubleToFloat transform will convert double entries to single-precision numbers, ready to be read by the policy. The StepCounter transform will be used to count the steps before the environment is terminated. We will use this measure as a supplementary measure of performance.

As we will see later, many of TorchRL's classes rely on TensorDict to communicate. You could think of it as a python dictionary with some extra tensor features. In practice, this means that many modules we will be working with need to be told what key to read (in_keys) and what key to write (out_keys) in the tensordict they will receive. Usually, if out_keys is omitted, it is assumed that the in_keys entries will be updated in-place. For our transforms, the only entry we are interested in is referred to as "observation" and our transform layers will be told to modify this entry and this entry only:

env = TransformedEnv(
    base_env,
    Compose(
        # normalize observations
        ObservationNorm(in_keys=["observation"]),
        DoubleToFloat(in_keys=["observation"]),
        StepCounter(),
    ),
) 

As you may have noticed, we have created a normalization layer but we did not set its normalization parameters. To do this, ObservationNorm can automatically gather the summary statistics of our environment:

env.transform[0].init_stats(num_iter=1000, reduce_dim=0, cat_dim=0) 

The ObservationNorm transform has now been populated with a location and a scale that will be used to normalize the data.

Let us do a little sanity check for the shape of our summary stats:

print("normalization constant shape:", env.transform[0].loc.shape) 
normalization constant shape: torch.Size([11]) 

An environment is not only defined by its simulator and transforms, but also by a series of metadata that describe what can be expected during its execution. For efficiency purposes, TorchRL is quite stringent when it comes to environment specs, but you can easily check that your environment specs are adequate. In our example, the GymWrapper and GymEnv that inherits from it already take care of setting the proper specs for your environment, so you should not have to worry about this.

Nevertheless, let's see a concrete example using our transformed environment by looking at its specs. There are three specs to look at: observation_spec, which defines what is to be expected when executing an action in the environment; reward_spec, which indicates the reward domain; and finally the input_spec (which contains the action_spec), which represents everything an environment requires to execute a single step.

print("observation_spec:", env.observation_spec)
print("reward_spec:", env.reward_spec)
print("input_spec:", env.input_spec)
print("action_spec (as defined by input_spec):", env.action_spec) 
observation_spec: CompositeSpec(
    observation: UnboundedContinuousTensorSpec(
        shape=torch.Size([11]),
        space=None,
        device=cuda:0,
        dtype=torch.float32,
        domain=continuous),
    step_count: BoundedTensorSpec(
        shape=torch.Size([1]),
        space=ContinuousBox(
            low=Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.int64, contiguous=True),
            high=Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.int64, contiguous=True)),
        device=cuda:0,
        dtype=torch.int64,
        domain=continuous), device=cuda:0, shape=torch.Size([]))
reward_spec: UnboundedContinuousTensorSpec(
    shape=torch.Size([1]),
    space=ContinuousBox(
        low=Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.float32, contiguous=True),
        high=Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.float32, contiguous=True)),
    device=cuda:0,
    dtype=torch.float32,
    domain=continuous)
input_spec: CompositeSpec(
    full_state_spec: CompositeSpec(
        step_count: BoundedTensorSpec(
            shape=torch.Size([1]),
            space=ContinuousBox(
                low=Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.int64, contiguous=True),
                high=Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.int64, contiguous=True)),
            device=cuda:0,
            dtype=torch.int64,
            domain=continuous), device=cuda:0, shape=torch.Size([])),
    full_action_spec: CompositeSpec(
        action: BoundedTensorSpec(
            shape=torch.Size([1]),
            space=ContinuousBox(
                low=Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.float32, contiguous=True),
                high=Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.float32, contiguous=True)),
            device=cuda:0,
            dtype=torch.float32,
            domain=continuous), device=cuda:0, shape=torch.Size([])), device=cuda:0, shape=torch.Size([]))
action_spec (as defined by input_spec): BoundedTensorSpec(
    shape=torch.Size([1]),
    space=ContinuousBox(
        low=Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.float32, contiguous=True),
        high=Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.float32, contiguous=True)),
    device=cuda:0,
    dtype=torch.float32,
    domain=continuous) 

The check_env_specs() function runs a small rollout and compares its output against the environment specs. If no error is raised, we can be confident that the specs are properly defined:

check_env_specs(env) 
check_env_specs succeeded! 

For fun, let's see what a simple random rollout looks like. You can call env.rollout(n_steps) and get an overview of what the environment inputs and outputs look like. Actions will automatically be drawn from the action spec domain, so you don't need to care about designing a random sampler.

Typically, at each step, an RL environment receives an action as input and outputs an observation, a reward and a done state. The observation may be composite, meaning that it could be composed of more than one tensor. This is not a problem for TorchRL, since the whole set of observations is automatically packed in the output TensorDict. After executing a rollout (for example, a sequence of environment steps and random action generation) over a given number of steps, we will retrieve a TensorDict instance with a shape that matches this trajectory length:

rollout = env.rollout(3)
print("rollout of three steps:", rollout)
print("Shape of the rollout TensorDict:", rollout.batch_size) 
rollout of three steps: TensorDict(
    fields={
        action: Tensor(shape=torch.Size([3, 1]), device=cuda:0, dtype=torch.float32, is_shared=True),
        done: Tensor(shape=torch.Size([3, 1]), device=cuda:0, dtype=torch.bool, is_shared=True),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([3, 1]), device=cuda:0, dtype=torch.bool, is_shared=True),
                observation: Tensor(shape=torch.Size([3, 11]), device=cuda:0, dtype=torch.float32, is_shared=True),
                reward: Tensor(shape=torch.Size([3, 1]), device=cuda:0, dtype=torch.float32, is_shared=True),
                step_count: Tensor(shape=torch.Size([3, 1]), device=cuda:0, dtype=torch.int64, is_shared=True),
                terminated: Tensor(shape=torch.Size([3, 1]), device=cuda:0, dtype=torch.bool, is_shared=True),
                truncated: Tensor(shape=torch.Size([3, 1]), device=cuda:0, dtype=torch.bool, is_shared=True)},
            batch_size=torch.Size([3]),
            device=cuda:0,
            is_shared=True),
        observation: Tensor(shape=torch.Size([3, 11]), device=cuda:0, dtype=torch.float32, is_shared=True),
        step_count: Tensor(shape=torch.Size([3, 1]), device=cuda:0, dtype=torch.int64, is_shared=True),
        terminated: Tensor(shape=torch.Size([3, 1]), device=cuda:0, dtype=torch.bool, is_shared=True),
        truncated: Tensor(shape=torch.Size([3, 1]), device=cuda:0, dtype=torch.bool, is_shared=True)},
    batch_size=torch.Size([3]),
    device=cuda:0,
    is_shared=True)
Shape of the rollout TensorDict: torch.Size([3]) 

The shape of our rollout data is torch.Size([3]), which matches the number of steps we ran it for. The "next" entry points to the data coming after the current step. In most cases, the "next" data at time t matches the data at t+1, but this may not be the case if we are using some specific transformations (for example, multi-step).
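As a quick, hedged sanity check (not part of the original tutorial; it holds here because the short rollout above contains no reset and no multi-step transform), the alignment between "observation" and ("next", "observation") can be verified directly:

# a sketch: within a single trajectory, the next-observation at step t should
# match the observation at step t+1
assert torch.allclose(
    rollout["next", "observation"][:-1],
    rollout["observation"][1:],
)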

Policy

PPO utilizes a stochastic policy to handle exploration. This means that our neural network will have to output the parameters of a distribution, rather than a single value corresponding to the action taken.

As the data is continuous, we use a Tanh-Normal distribution to respect the action space boundaries. TorchRL provides such a distribution, and the only thing we need to care about is to build a neural network that outputs the right number of parameters for the policy to work with (a location, or mean, and a scale):

$$f_{\theta}(\text{observation}) = \mu_{\theta}(\text{observation}), \sigma^{+}_{\theta}(\text{observation})$$

The only extra difficulty that is brought up here is to split our output in two equal parts and map the second to a strictly positive space.

We design the policy in three steps:

  1. Define a neural network D_obs -> 2 * D_action. Indeed, our loc (mu) and scale (sigma) both have dimension D_action.

  2. Append a NormalParamExtractor to extract a location and a scale (for example, it splits the input in two equal parts and applies a positive transformation to the scale parameter).

  3. Create a probabilistic TensorDictModule that can generate this distribution and sample from it.

actor_net = nn.Sequential(
    nn.LazyLinear(num_cells, device=device),
    nn.Tanh(),
    nn.LazyLinear(num_cells, device=device),
    nn.Tanh(),
    nn.LazyLinear(num_cells, device=device),
    nn.Tanh(),
    nn.LazyLinear(2 * env.action_spec.shape[-1], device=device),
    NormalParamExtractor(),
) 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/lazy.py:181: UserWarning:

Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 
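To make step 2 above concrete, here is a small, purely illustrative sketch (the dummy tensor is made up) of what NormalParamExtractor does on its own:

# a sketch: NormalParamExtractor splits the last dimension in two halves and
# maps the second half to strictly positive values (the scale)
extractor = NormalParamExtractor()
dummy = torch.randn(4, 2 * env.action_spec.shape[-1], device=device)
loc, scale = extractor(dummy)
print(loc.shape, scale.shape)  # both torch.Size([4, 1]) for this environment
print((scale > 0).all())       # the scale entries are strictly positive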

To enable the policy to "talk" with the environment through the tensordict data carrier, we wrap the nn.Module in a TensorDictModule. This class will simply ready the in_keys it is provided with and write the outputs in-place at the registered out_keys.

policy_module = TensorDictModule(
    actor_net, in_keys=["observation"], out_keys=["loc", "scale"]
) 

We now need to build a distribution out of the location and scale of our normal distribution. To do so, we instruct the ProbabilisticActor class to build a TanhNormal out of the location and scale parameters. We also provide the minimum and maximum values of this distribution, which we gather from the environment specs.

The name of the in_keys (and hence the name of the out_keys of the TensorDictModule above) cannot be set to just any value one may like, as the TanhNormal distribution constructor will expect the loc and scale keyword arguments. That being said, ProbabilisticActor also accepts in_keys of type Dict[str, str], where the key-value pairs indicate which in_key string should be used for each keyword argument.

policy_module = ProbabilisticActor(
    module=policy_module,
    spec=env.action_spec,
    in_keys=["loc", "scale"],
    distribution_class=TanhNormal,
    distribution_kwargs={
        "min": env.action_spec.space.minimum,
        "max": env.action_spec.space.maximum,
    },
    return_log_prob=True,
    # we'll need the log-prob for the numerator of the importance weights
) 

Value network

The value network is a crucial component of the PPO algorithm, even though it won't be used at inference time. This module will read the observations and return an estimate of the discounted return for the following trajectory. This allows us to amortize learning by relying on a utility estimate that is learned on-the-fly during training. Our value network shares the same structure as the policy, but for simplicity we assign it its own set of parameters.

value_net = nn.Sequential(
    nn.LazyLinear(num_cells, device=device),
    nn.Tanh(),
    nn.LazyLinear(num_cells, device=device),
    nn.Tanh(),
    nn.LazyLinear(num_cells, device=device),
    nn.Tanh(),
    nn.LazyLinear(1, device=device),
)

value_module = ValueOperator(
    module=value_net,
    in_keys=["observation"],
) 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/lazy.py:181: UserWarning:

Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment. 

Let's try our policy and value modules. As we said earlier, the usage of TensorDictModule makes it possible to directly read the output of the environment to run these modules, as they know what information to read and where to write it:

print("Running policy:", policy_module(env.reset()))
print("Running value:", value_module(env.reset())) 
Running policy: TensorDict(
    fields={
        action: Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.float32, is_shared=True),
        done: Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.bool, is_shared=True),
        loc: Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.float32, is_shared=True),
        observation: Tensor(shape=torch.Size([11]), device=cuda:0, dtype=torch.float32, is_shared=True),
        sample_log_prob: Tensor(shape=torch.Size([]), device=cuda:0, dtype=torch.float32, is_shared=True),
        scale: Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.float32, is_shared=True),
        step_count: Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.int64, is_shared=True),
        terminated: Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.bool, is_shared=True),
        truncated: Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.bool, is_shared=True)},
    batch_size=torch.Size([]),
    device=cuda:0,
    is_shared=True)
Running value: TensorDict(
    fields={
        done: Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.bool, is_shared=True),
        observation: Tensor(shape=torch.Size([11]), device=cuda:0, dtype=torch.float32, is_shared=True),
        state_value: Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.float32, is_shared=True),
        step_count: Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.int64, is_shared=True),
        terminated: Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.bool, is_shared=True),
        truncated: Tensor(shape=torch.Size([1]), device=cuda:0, dtype=torch.bool, is_shared=True)},
    batch_size=torch.Size([]),
    device=cuda:0,
    is_shared=True) 

Data collector

TorchRL provides a set of DataCollector classes. Briefly, these classes execute three operations: reset an environment, compute an action given the latest observation, execute a step in the environment, and repeat the last two steps until the environment signals a stop (or reaches a done state).

They allow you to control how many frames to collect at each iteration (via the frames_per_batch parameter), when to reset the environment (via the max_frames_per_traj argument), on which device the policy should be executed, etc. They are also designed to work efficiently with batched and multiprocessed environments.

The simplest data collector is the SyncDataCollector: it is an iterator that you can use to get batches of data of a given length, and that will stop once a total number of frames (total_frames) has been collected. Other data collectors (MultiSyncDataCollector and MultiaSyncDataCollector) will execute the same operations in synchronous and asynchronous manner over a set of multiprocessed workers.

As with the policy and environment before, the data collector will return TensorDict instances with a total number of elements that will match frames_per_batch. Using TensorDict to pass data to the training loop allows you to write data-loading pipelines that are completely oblivious to the actual specificities of the rollout content.

collector = SyncDataCollector(
    env,
    policy_module,
    frames_per_batch=frames_per_batch,
    total_frames=total_frames,
    split_trajs=False,
    device=device,
) 

Replay buffer

Replay buffers are a common building piece of off-policy RL algorithms. In on-policy contexts, a replay buffer is refilled every time a batch of data is collected, and its data is repeatedly consumed for a certain number of epochs.

TorchRL's replay buffers are built using a common container, ReplayBuffer, which takes as arguments the components of the buffer: a storage, a writer, a sampler and possibly some transforms. Only the storage (which indicates the replay buffer capacity) is mandatory. We also specify a sampler without repetition to avoid sampling the same item multiple times in one epoch. Using a replay buffer for PPO is not mandatory and we could simply sample the sub-batches from the collected batch, but using these classes makes it easy for us to build the inner training loop in a reproducible way.

replay_buffer = ReplayBuffer(
    storage=LazyTensorStorage(frames_per_batch),
    sampler=SamplerWithoutReplacement(),
) 

Loss function

The PPO loss can be directly imported from TorchRL for convenience using the ClipPPOLoss class. This is the easiest way of utilizing PPO: it hides away the mathematical operations of PPO and the control flow that goes with it.
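For reference (this is the standard textbook formulation, not TorchRL-specific notation), the clipped surrogate objective that this loss implements is commonly written as

$$L^{\text{CLIP}}(\theta) = \mathbb{E}_t\Big[\min\big(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}(r_t(\theta),\,1-\epsilon,\,1+\epsilon)\,\hat{A}_t\big)\Big], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_\text{old}}(a_t \mid s_t)},$$

where $\epsilon$ corresponds to the clip_epsilon hyperparameter defined above.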

PPO requires some "advantage estimation" to be computed. In short, an advantage is a value that reflects an expectancy over the return while dealing with the bias/variance tradeoff. To compute the advantage, one just needs to (1) build the advantage module, which utilizes our value operator, and (2) pass each batch of data through it before each epoch. The GAE module will update the input tensordict with new "advantage" and "value_target" entries. The "value_target" is a gradient-free tensor that represents the empirical value that the value network should output for the given input observation. Both of these will be used by ClipPPOLoss to return the policy and value losses.
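In its standard form (again, textbook notation rather than anything read from the code below), GAE computes

$$\hat{A}_t = \sum_{l \ge 0} (\gamma\lambda)^l\,\delta_{t+l}, \qquad \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t),$$

with gamma and lmbda as defined among the hyperparameters above.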

advantage_module = GAE(
    gamma=gamma, lmbda=lmbda, value_network=value_module, average_gae=True
)

loss_module = ClipPPOLoss(
    actor=policy_module,
    critic=value_module,
    advantage_key="advantage",
    clip_epsilon=clip_epsilon,
    entropy_bonus=bool(entropy_eps),
    entropy_coef=entropy_eps,
    # these keys match by default but we set this for completeness
    value_target_key=advantage_module.value_target_key,
    critic_coef=1.0,
    gamma=0.99,
    loss_critic_type="smooth_l1",
)

optim = torch.optim.Adam(loss_module.parameters(), lr)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optim, total_frames // frames_per_batch, 0.0
) 
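As a quick, hedged sanity check (a throwaway rollout that is not part of the original tutorial), we can confirm that the advantage module adds the expected entries to a tensordict:

# a sketch: after calling the GAE module, the tensordict should contain
# "advantage" and "value_target" entries at its root
with torch.no_grad():
    td_check = env.rollout(5, policy_module)
    advantage_module(td_check)
    print("advantage" in td_check.keys(), "value_target" in td_check.keys())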

Training loop

We now have all the pieces needed to code our training loop. The steps include:

  • Collect data

    • Compute advantage

      • Loop over the collected data to compute loss values

      • Back propagate

      • Optimize

      • Repeat

    • Repeat

  • Repeat

logs = defaultdict(list)
pbar = tqdm(total=total_frames * frame_skip)
eval_str = ""

# We iterate over the collector until it reaches the total number of frames it was
# designed to collect:
for i, tensordict_data in enumerate(collector):
    # we now have a batch of data to work with. Let's learn something from it.
    for _ in range(num_epochs):
        # We'll need an "advantage" signal to make PPO work.
        # We re-compute it at each epoch as its value depends on the value
        # network which is updated in the inner loop.
        advantage_module(tensordict_data)
        data_view = tensordict_data.reshape(-1)
        replay_buffer.extend(data_view.cpu())
        for _ in range(frames_per_batch // sub_batch_size):
            subdata = replay_buffer.sample(sub_batch_size)
            loss_vals = loss_module(subdata.to(device))
            loss_value = (
                loss_vals["loss_objective"]
                + loss_vals["loss_critic"]
                + loss_vals["loss_entropy"]
            )

            # Optimization: backward, grad clipping and optimization step
            loss_value.backward()
            # this is not strictly mandatory but it's good practice to keep
            # your gradient norm bounded
            torch.nn.utils.clip_grad_norm_(loss_module.parameters(), max_grad_norm)
            optim.step()
            optim.zero_grad()

    logs["reward"].append(tensordict_data["next", "reward"].mean().item())
    pbar.update(tensordict_data.numel() * frame_skip)
    cum_reward_str = (
        f"average reward={logs['reward'][-1]: 4.4f} (init={logs['reward'][0]: 4.4f})"
    )
    logs["step_count"].append(tensordict_data["step_count"].max().item())
    stepcount_str = f"step count (max): {logs['step_count'][-1]}"
    logs["lr"].append(optim.param_groups[0]["lr"])
    lr_str = f"lr policy: {logs['lr'][-1]: 4.4f}"
    if i % 10 == 0:
        # We evaluate the policy once every 10 batches of data.
        # Evaluation is rather simple: execute the policy without exploration
        # (take the expected value of the action distribution) for a given
        # number of steps (1000, which is our ``env`` horizon).
        # The ``rollout`` method of the ``env`` can take a policy as argument:
        # it will then execute this policy at each step.
        with set_exploration_mode("mean"), torch.no_grad():
            # execute a rollout with the trained policy
            eval_rollout = env.rollout(1000, policy_module)
            logs["eval reward"].append(eval_rollout["next", "reward"].mean().item())
            logs["eval reward (sum)"].append(
                eval_rollout["next", "reward"].sum().item()
            )
            logs["eval step_count"].append(eval_rollout["step_count"].max().item())
            eval_str = (
                f"eval cumulative reward: {logs['eval reward (sum)'][-1]: 4.4f} "
                f"(init: {logs['eval reward (sum)'][0]: 4.4f}), "
                f"eval step-count: {logs['eval step_count'][-1]}"
            )
            del eval_rollout
    pbar.set_description(", ".join([eval_str, cum_reward_str, stepcount_str, lr_str]))

    # We're also using a learning rate scheduler. Like the gradient clipping,
    # this is a nice-to-have but nothing necessary for PPO to work.
    scheduler.step() 
 0%|          | 0/50000 [00:00<?, ?it/s]
  2%|2         | 1000/50000 [00:06<05:18, 153.98it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.0850 (init= 9.0850), step count (max): 16, lr policy:  0.0003:   2%|2         | 1000/50000 [00:06<05:18, 153.98it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.0850 (init= 9.0850), step count (max): 16, lr policy:  0.0003:   4%|4         | 2000/50000 [00:12<04:54, 162.80it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.1122 (init= 9.0850), step count (max): 12, lr policy:  0.0003:   4%|4         | 2000/50000 [00:12<04:54, 162.80it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.1122 (init= 9.0850), step count (max): 12, lr policy:  0.0003:   6%|6         | 3000/50000 [00:18<04:41, 166.90it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.1491 (init= 9.0850), step count (max): 18, lr policy:  0.0003:   6%|6         | 3000/50000 [00:18<04:41, 166.90it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.1491 (init= 9.0850), step count (max): 18, lr policy:  0.0003:   8%|8         | 4000/50000 [00:23<04:31, 169.41it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.1947 (init= 9.0850), step count (max): 24, lr policy:  0.0003:   8%|8         | 4000/50000 [00:23<04:31, 169.41it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.1947 (init= 9.0850), step count (max): 24, lr policy:  0.0003:  10%|#         | 5000/50000 [00:29<04:25, 169.30it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.2093 (init= 9.0850), step count (max): 20, lr policy:  0.0003:  10%|#         | 5000/50000 [00:29<04:25, 169.30it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.2093 (init= 9.0850), step count (max): 20, lr policy:  0.0003:  12%|#2        | 6000/50000 [00:35<04:17, 171.17it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.2281 (init= 9.0850), step count (max): 27, lr policy:  0.0003:  12%|#2        | 6000/50000 [00:35<04:17, 171.17it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.2281 (init= 9.0850), step count (max): 27, lr policy:  0.0003:  14%|#4        | 7000/50000 [00:41<04:09, 172.49it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.2291 (init= 9.0850), step count (max): 30, lr policy:  0.0003:  14%|#4        | 7000/50000 [00:41<04:09, 172.49it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.2291 (init= 9.0850), step count (max): 30, lr policy:  0.0003:  16%|#6        | 8000/50000 [00:46<04:02, 173.50it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.2427 (init= 9.0850), step count (max): 39, lr policy:  0.0003:  16%|#6        | 8000/50000 [00:46<04:02, 173.50it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.2427 (init= 9.0850), step count (max): 39, lr policy:  0.0003:  18%|#8        | 9000/50000 [00:52<03:55, 174.40it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.2471 (init= 9.0850), step count (max): 42, lr policy:  0.0003:  18%|#8        | 9000/50000 [00:52<03:55, 174.40it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.2471 (init= 9.0850), step count (max): 42, lr policy:  0.0003:  20%|##        | 10000/50000 [00:58<03:48, 175.06it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.2578 (init= 9.0850), step count (max): 54, lr policy:  0.0003:  20%|##        | 10000/50000 [00:58<03:48, 175.06it/s]
eval cumulative reward:  101.1702 (init:  101.1702), eval step-count: 10, average reward= 9.2578 (init= 9.0850), step count (max): 54, lr policy:  0.0003:  22%|##2       | 11000/50000 [01:04<03:44, 173.84it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2618 (init= 9.0850), step count (max): 77, lr policy:  0.0003:  22%|##2       | 11000/50000 [01:04<03:44, 173.84it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2618 (init= 9.0850), step count (max): 77, lr policy:  0.0003:  24%|##4       | 12000/50000 [01:09<03:38, 173.96it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2594 (init= 9.0850), step count (max): 52, lr policy:  0.0003:  24%|##4       | 12000/50000 [01:09<03:38, 173.96it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2594 (init= 9.0850), step count (max): 52, lr policy:  0.0003:  26%|##6       | 13000/50000 [01:15<03:32, 174.41it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2604 (init= 9.0850), step count (max): 40, lr policy:  0.0003:  26%|##6       | 13000/50000 [01:15<03:32, 174.41it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2604 (init= 9.0850), step count (max): 40, lr policy:  0.0003:  28%|##8       | 14000/50000 [01:21<03:25, 174.83it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2739 (init= 9.0850), step count (max): 53, lr policy:  0.0003:  28%|##8       | 14000/50000 [01:21<03:25, 174.83it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2739 (init= 9.0850), step count (max): 53, lr policy:  0.0003:  30%|###       | 15000/50000 [01:26<03:19, 175.04it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2667 (init= 9.0850), step count (max): 49, lr policy:  0.0002:  30%|###       | 15000/50000 [01:26<03:19, 175.04it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2667 (init= 9.0850), step count (max): 49, lr policy:  0.0002:  32%|###2      | 16000/50000 [01:32<03:14, 175.01it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2706 (init= 9.0850), step count (max): 57, lr policy:  0.0002:  32%|###2      | 16000/50000 [01:32<03:14, 175.01it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2706 (init= 9.0850), step count (max): 57, lr policy:  0.0002:  34%|###4      | 17000/50000 [01:38<03:08, 174.87it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2566 (init= 9.0850), step count (max): 58, lr policy:  0.0002:  34%|###4      | 17000/50000 [01:38<03:08, 174.87it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2566 (init= 9.0850), step count (max): 58, lr policy:  0.0002:  36%|###6      | 18000/50000 [01:44<03:04, 173.30it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2628 (init= 9.0850), step count (max): 44, lr policy:  0.0002:  36%|###6      | 18000/50000 [01:44<03:04, 173.30it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2628 (init= 9.0850), step count (max): 44, lr policy:  0.0002:  38%|###8      | 19000/50000 [01:50<02:58, 173.94it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2692 (init= 9.0850), step count (max): 56, lr policy:  0.0002:  38%|###8      | 19000/50000 [01:50<02:58, 173.94it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2692 (init= 9.0850), step count (max): 56, lr policy:  0.0002:  40%|####      | 20000/50000 [01:55<02:52, 174.34it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2711 (init= 9.0850), step count (max): 83, lr policy:  0.0002:  40%|####      | 20000/50000 [01:55<02:52, 174.34it/s]
eval cumulative reward:  184.6869 (init:  101.1702), eval step-count: 19, average reward= 9.2711 (init= 9.0850), step count (max): 83, lr policy:  0.0002:  42%|####2     | 21000/50000 [02:01<02:45, 174.75it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2784 (init= 9.0850), step count (max): 62, lr policy:  0.0002:  42%|####2     | 21000/50000 [02:01<02:45, 174.75it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2784 (init= 9.0850), step count (max): 62, lr policy:  0.0002:  44%|####4     | 22000/50000 [02:07<02:41, 173.74it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2762 (init= 9.0850), step count (max): 60, lr policy:  0.0002:  44%|####4     | 22000/50000 [02:07<02:41, 173.74it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2762 (init= 9.0850), step count (max): 60, lr policy:  0.0002:  46%|####6     | 23000/50000 [02:12<02:34, 174.39it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2779 (init= 9.0850), step count (max): 69, lr policy:  0.0002:  46%|####6     | 23000/50000 [02:12<02:34, 174.39it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2779 (init= 9.0850), step count (max): 69, lr policy:  0.0002:  48%|####8     | 24000/50000 [02:18<02:30, 173.23it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2783 (init= 9.0850), step count (max): 52, lr policy:  0.0002:  48%|####8     | 24000/50000 [02:18<02:30, 173.23it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2783 (init= 9.0850), step count (max): 52, lr policy:  0.0002:  50%|#####     | 25000/50000 [02:24<02:23, 173.93it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2750 (init= 9.0850), step count (max): 50, lr policy:  0.0002:  50%|#####     | 25000/50000 [02:24<02:23, 173.93it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2750 (init= 9.0850), step count (max): 50, lr policy:  0.0002:  52%|#####2    | 26000/50000 [02:30<02:17, 174.39it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2738 (init= 9.0850), step count (max): 76, lr policy:  0.0001:  52%|#####2    | 26000/50000 [02:30<02:17, 174.39it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2738 (init= 9.0850), step count (max): 76, lr policy:  0.0001:  54%|#####4    | 27000/50000 [02:35<02:11, 174.76it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2835 (init= 9.0850), step count (max): 72, lr policy:  0.0001:  54%|#####4    | 27000/50000 [02:35<02:11, 174.76it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2835 (init= 9.0850), step count (max): 72, lr policy:  0.0001:  56%|#####6    | 28000/50000 [02:41<02:05, 174.97it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2823 (init= 9.0850), step count (max): 61, lr policy:  0.0001:  56%|#####6    | 28000/50000 [02:41<02:05, 174.97it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2823 (init= 9.0850), step count (max): 61, lr policy:  0.0001:  58%|#####8    | 29000/50000 [02:47<01:59, 175.14it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2865 (init= 9.0850), step count (max): 60, lr policy:  0.0001:  58%|#####8    | 29000/50000 [02:47<01:59, 175.14it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2865 (init= 9.0850), step count (max): 60, lr policy:  0.0001:  60%|######    | 30000/50000 [02:53<01:55, 173.69it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2899 (init= 9.0850), step count (max): 74, lr policy:  0.0001:  60%|######    | 30000/50000 [02:53<01:55, 173.69it/s]
eval cumulative reward:  277.6396 (init:  101.1702), eval step-count: 29, average reward= 9.2899 (init= 9.0850), step count (max): 74, lr policy:  0.0001:  62%|######2   | 31000/50000 [02:58<01:48, 174.42it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2936 (init= 9.0850), step count (max): 60, lr policy:  0.0001:  62%|######2   | 31000/50000 [02:59<01:48, 174.42it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2936 (init= 9.0850), step count (max): 60, lr policy:  0.0001:  64%|######4   | 32000/50000 [03:04<01:43, 173.40it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2996 (init= 9.0850), step count (max): 80, lr policy:  0.0001:  64%|######4   | 32000/50000 [03:04<01:43, 173.40it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2996 (init= 9.0850), step count (max): 80, lr policy:  0.0001:  66%|######6   | 33000/50000 [03:10<01:37, 174.25it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.3009 (init= 9.0850), step count (max): 93, lr policy:  0.0001:  66%|######6   | 33000/50000 [03:10<01:37, 174.25it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.3009 (init= 9.0850), step count (max): 93, lr policy:  0.0001:  68%|######8   | 34000/50000 [03:16<01:31, 174.77it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2965 (init= 9.0850), step count (max): 81, lr policy:  0.0001:  68%|######8   | 34000/50000 [03:16<01:31, 174.77it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2965 (init= 9.0850), step count (max): 81, lr policy:  0.0001:  70%|#######   | 35000/50000 [03:21<01:25, 175.08it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2899 (init= 9.0850), step count (max): 68, lr policy:  0.0001:  70%|#######   | 35000/50000 [03:21<01:25, 175.08it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2899 (init= 9.0850), step count (max): 68, lr policy:  0.0001:  72%|#######2  | 36000/50000 [03:27<01:19, 175.32it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2915 (init= 9.0850), step count (max): 50, lr policy:  0.0001:  72%|#######2  | 36000/50000 [03:27<01:19, 175.32it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2915 (init= 9.0850), step count (max): 50, lr policy:  0.0001:  74%|#######4  | 37000/50000 [03:33<01:14, 173.74it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2923 (init= 9.0850), step count (max): 115, lr policy:  0.0001:  74%|#######4  | 37000/50000 [03:33<01:14, 173.74it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2923 (init= 9.0850), step count (max): 115, lr policy:  0.0001:  76%|#######6  | 38000/50000 [03:38<01:08, 174.45it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2979 (init= 9.0850), step count (max): 57, lr policy:  0.0000:  76%|#######6  | 38000/50000 [03:38<01:08, 174.45it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2979 (init= 9.0850), step count (max): 57, lr policy:  0.0000:  78%|#######8  | 39000/50000 [03:44<01:02, 174.89it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2898 (init= 9.0850), step count (max): 57, lr policy:  0.0000:  78%|#######8  | 39000/50000 [03:44<01:02, 174.89it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2898 (init= 9.0850), step count (max): 57, lr policy:  0.0000:  80%|########  | 40000/50000 [03:50<00:57, 175.15it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2846 (init= 9.0850), step count (max): 67, lr policy:  0.0000:  80%|########  | 40000/50000 [03:50<00:57, 175.15it/s]
eval cumulative reward:  409.9215 (init:  101.1702), eval step-count: 43, average reward= 9.2846 (init= 9.0850), step count (max): 67, lr policy:  0.0000:  82%|########2 | 41000/50000 [03:56<00:51, 175.55it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.2923 (init= 9.0850), step count (max): 76, lr policy:  0.0000:  82%|########2 | 41000/50000 [03:56<00:51, 175.55it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.2923 (init= 9.0850), step count (max): 76, lr policy:  0.0000:  84%|########4 | 42000/50000 [04:01<00:45, 174.00it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.2962 (init= 9.0850), step count (max): 75, lr policy:  0.0000:  84%|########4 | 42000/50000 [04:01<00:45, 174.00it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.2962 (init= 9.0850), step count (max): 75, lr policy:  0.0000:  86%|########6 | 43000/50000 [04:07<00:40, 173.10it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.2913 (init= 9.0850), step count (max): 60, lr policy:  0.0000:  86%|########6 | 43000/50000 [04:07<00:40, 173.10it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.2913 (init= 9.0850), step count (max): 60, lr policy:  0.0000:  88%|########8 | 44000/50000 [04:13<00:34, 174.17it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.2912 (init= 9.0850), step count (max): 108, lr policy:  0.0000:  88%|########8 | 44000/50000 [04:13<00:34, 174.17it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.2912 (init= 9.0850), step count (max): 108, lr policy:  0.0000:  90%|######### | 45000/50000 [04:19<00:28, 174.85it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.2952 (init= 9.0850), step count (max): 58, lr policy:  0.0000:  90%|######### | 45000/50000 [04:19<00:28, 174.85it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.2952 (init= 9.0850), step count (max): 58, lr policy:  0.0000:  92%|#########2| 46000/50000 [04:24<00:22, 175.34it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.3059 (init= 9.0850), step count (max): 125, lr policy:  0.0000:  92%|#########2| 46000/50000 [04:24<00:22, 175.34it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.3059 (init= 9.0850), step count (max): 125, lr policy:  0.0000:  94%|#########3| 47000/50000 [04:30<00:17, 175.62it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.3046 (init= 9.0850), step count (max): 136, lr policy:  0.0000:  94%|#########3| 47000/50000 [04:30<00:17, 175.62it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.3046 (init= 9.0850), step count (max): 136, lr policy:  0.0000:  96%|#########6| 48000/50000 [04:36<00:11, 176.01it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.3019 (init= 9.0850), step count (max): 130, lr policy:  0.0000:  96%|#########6| 48000/50000 [04:36<00:11, 176.01it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.3019 (init= 9.0850), step count (max): 130, lr policy:  0.0000:  98%|#########8| 49000/50000 [04:41<00:05, 174.60it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.3142 (init= 9.0850), step count (max): 156, lr policy:  0.0000:  98%|#########8| 49000/50000 [04:41<00:05, 174.60it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.3142 (init= 9.0850), step count (max): 156, lr policy:  0.0000: 100%|##########| 50000/50000 [04:47<00:00, 175.27it/s]
eval cumulative reward:  503.3041 (init:  101.1702), eval step-count: 53, average reward= 9.3095 (init= 9.0850), step count (max): 144, lr policy:  0.0000: 100%|##########| 50000/50000 [04:47<00:00, 175.27it/s] 

Results

Before the 1M step cap is reached, the algorithm should have reached a max step count of 1000 steps, which is the maximum number of steps before a trajectory is truncated.

plt.figure(figsize=(10, 10))
plt.subplot(2, 2, 1)
plt.plot(logs["reward"])
plt.title("training rewards (average)")
plt.subplot(2, 2, 2)
plt.plot(logs["step_count"])
plt.title("Max step count (training)")
plt.subplot(2, 2, 3)
plt.plot(logs["eval reward (sum)"])
plt.title("Return (test)")
plt.subplot(2, 2, 4)
plt.plot(logs["eval step_count"])
plt.title("Max step count (test)")
plt.show() 

(Figure: four panels — training rewards (average), max step count (training), return (test), and max step count (test).)

Conclusion and next steps

In this tutorial, we have learned:

  1. How to create and customize an environment with torchrl;

  2. How to write a model and a loss function;

  3. How to set up a typical training loop.

If you want to experiment with this tutorial a bit more, you can apply the following modifications:

  • From an efficiency perspective, we could run several simulations in parallel to speed up data collection. Check ParallelEnv for further information (a hedged sketch follows after this list).

  • From a logging perspective, one could add a torchrl.record.VideoRecorder transform to the environment after asking for rendering to get a visual rendering of the inverted pendulum in action. Check torchrl.record to know more.
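As a minimal sketch of the first suggestion (the worker count of 4 and the reuse of the same environment name are illustrative choices, not part of the original tutorial):

from torchrl.envs import ParallelEnv

# a sketch: run 4 copies of the base environment in separate processes and
# apply the same transforms to the batched environment as before
make_env = lambda: GymEnv(
    "InvertedDoublePendulum-v4", device=device, frame_skip=frame_skip
)
parallel_env = TransformedEnv(
    ParallelEnv(4, make_env),
    Compose(
        ObservationNorm(in_keys=["observation"]),
        DoubleToFloat(in_keys=["observation"]),
        StepCounter(),
    ),
)
# note: the ObservationNorm statistics would still need to be initialized with
# init_stats, this time reducing over both the worker and the step dimensions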

Total running time of the script: (4 minutes 50.072 seconds)

Download Python source code: reinforcement_ppo.py

Download Jupyter notebook: reinforcement_ppo.ipynb

Gallery generated by Sphinx-Gallery
