TransformerEncoderLayer¶
class torch.nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation=<function relu>, layer_norm_eps=1e-05, batch_first=False, norm_first=False, device=None, dtype=None)[source]¶
TransformerEncoderLayer is made up of self-attention and a feedforward network. This standard encoder layer is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement this layer in a different way during application.
- Parameters
d_model – the number of expected features in the input (required).
nhead – the number of heads in the multiheadattention models (required).
dim_feedforward – the dimension of the feedforward network model (default=2048).
dropout – the dropout value (default=0.1).
activation – the activation function of the intermediate layer, can be a string (“relu” or “gelu”) or a unary callable. Default: relu
layer_norm_eps – the eps value in layer normalization components (default=1e-5).
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False (seq, batch, feature).
norm_first – if True, layer norm is done prior to attention and feedforward operations, respectively. Otherwise it’s done after. Default: False (after).
- Examples::
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
>>> src = torch.rand(10, 32, 512)
>>> out = encoder_layer(src)
- Alternatively, when batch_first is True:
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
>>> src = torch.rand(32, 10, 512)
>>> out = encoder_layer(src)
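- For illustration only (not part of the original reference), the activation argument can also be passed as a unary callable, and pre-normalization can be enabled via norm_first; a minimal sketch:
>>> import torch
>>> import torch.nn as nn
>>> import torch.nn.functional as F
>>> # pre-norm encoder layer with a callable activation instead of the "relu"/"gelu" strings
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8,
...                                             activation=F.gelu, norm_first=True)
>>> src = torch.rand(10, 32, 512)
>>> out = encoder_layer(src)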
- Fast path:
forward() will use a special optimized implementation if all of the following conditions are met (a short usage sketch follows this list):
Either autograd is disabled (using torch.inference_mode or torch.no_grad) or no tensor argument has requires_grad set
training is disabled (using .eval())
batch_first is True and the input is batched (i.e., src.dim() == 3)
norm_first is False (this restriction may be loosened in the future)
activation is one of: "relu", "gelu", torch.nn.functional.relu, or torch.nn.functional.gelu
at most one of src_mask and src_key_padding_mask is passed
if src is a NestedTensor, neither src_mask nor src_key_padding_mask is passed
the two LayerNorm instances have a consistent eps value (this will naturally be the case unless the caller has manually modified one without modifying the other)
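A minimal sketch (illustrative, not from the reference text) of a call that satisfies the conditions above: the layer is put in eval mode, autograd is disabled with torch.inference_mode, and a batched (batch, seq, feature) input is used with batch_first=True:
>>> import torch
>>> import torch.nn as nn
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True).eval()
>>> src = torch.rand(32, 10, 512)      # batched 3-D input: (batch, seq, feature)
>>> with torch.inference_mode():       # autograd disabled
...     out = encoder_layer(src)       # eligible for the optimized fast path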
If the optimized implementation is in use, a NestedTensor can be passed for src to represent padding more efficiently than using a padding mask. In this case, a NestedTensor will be returned, and an additional speedup proportional to the fraction of the input that is padding can be expected.
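As an illustrative sketch only, and assuming the torch.nested.nested_tensor constructor is available in your PyTorch build, a ragged batch could be packed as a NestedTensor instead of using padding:
>>> import torch
>>> import torch.nn as nn
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True).eval()
>>> # two sequences of different lengths, packed without explicit padding
>>> seqs = [torch.rand(7, 512), torch.rand(10, 512)]
>>> nt = torch.nested.nested_tensor(seqs)
>>> with torch.inference_mode():
...     out = encoder_layer(nt)        # a NestedTensor is returned on the fast path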
forward(src, src_mask=None, src_key_padding_mask=None)[source]¶
Pass the input through the encoder layer.
- Parameters
src – the sequence to the encoder layer (required).
src_mask – the mask for the src sequence (optional).
src_key_padding_mask – the mask for the src keys per batch (optional).
- Shape:
see the docs in Transformer class.
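- As a hedged illustration (not part of the reference), a boolean src_key_padding_mask marking padded key positions with True might be passed to forward() like this:
>>> import torch
>>> import torch.nn as nn
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
>>> src = torch.rand(32, 10, 512)
>>> # True marks positions the attention should ignore
>>> src_key_padding_mask = torch.zeros(32, 10, dtype=torch.bool)
>>> src_key_padding_mask[:, 8:] = True  # last two positions of every sequence are padding
>>> out = encoder_layer(src, src_key_padding_mask=src_key_padding_mask)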