MarginRankingLoss
- class torch.nn.MarginRankingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')
Creates a criterion that measures the loss given inputs $x_1$, $x_2$, two 1D mini-batch or 0D Tensors, and a label 1D mini-batch or 0D Tensor $y$ (containing 1 or -1).
If $y = 1$ then it is assumed that the first input should be ranked higher (have a larger value) than the second input, and vice-versa for $y = -1$.
The loss function for each pair of samples in the mini-batch is:

$$\text{loss}(x_1, x_2, y) = \max(0, -y \cdot (x_1 - x_2) + \text{margin})$$
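As a quick sanity check of this formula (a minimal sketch with arbitrary values, not part of the official examples), the elementwise loss can be reproduced with basic tensor ops and compared against the module using `reduction='none'`:

>>> import torch
>>> import torch.nn as nn
>>> x1 = torch.tensor([0.5, -1.0, 2.0])
>>> x2 = torch.tensor([1.0, -2.0, 0.0])
>>> y = torch.tensor([1.0, -1.0, 1.0])
>>> # max(0, -y * (x1 - x2) + margin), with the default margin of 0
>>> manual = torch.clamp(-y * (x1 - x2), min=0)
>>> torch.allclose(nn.MarginRankingLoss(reduction='none')(x1, x2, y), manual)
True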
- Parameters
margin (float, optional) – Has a default value of $0$.
size_average (bool, optional) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when `reduce` is `False`. Default: `True`
reduce (bool, optional) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True`
reduction (str, optional) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed (the sketch after this parameter list illustrates all three modes). Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'`
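The three reduction modes relate as follows (a brief sketch reusing the imports above; tensor values are arbitrary): `'none'` keeps one loss per pair, while `'mean'` and `'sum'` collapse those per-pair losses to a scalar:

>>> x1, x2 = torch.randn(4), torch.randn(4)
>>> y = torch.ones(4)
>>> per_pair = nn.MarginRankingLoss(reduction='none')(x1, x2, y)
>>> per_pair.shape
torch.Size([4])
>>> torch.allclose(nn.MarginRankingLoss(reduction='mean')(x1, x2, y), per_pair.mean())
True
>>> torch.allclose(nn.MarginRankingLoss(reduction='sum')(x1, x2, y), per_pair.sum())
True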
- Shape:
Input1: $(N)$ or $()$, where $N$ is the batch size.
Input2: $(N)$ or $()$, same shape as Input1.
Target: $(N)$ or $()$, same shape as the inputs.
Output: scalar. If `reduction` is `'none'` and the input size is not $()$, then $(N)$.
Examples:
>>> loss = nn.MarginRankingLoss()
>>> input1 = torch.randn(3, requires_grad=True)
>>> input2 = torch.randn(3, requires_grad=True)
>>> target = torch.randn(3).sign()
>>> output = loss(input1, input2, target)
>>> output.backward()
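Since 0D Tensors are also accepted (see the Shape section above), a single pair of scalars works as well; this sketch with a nonzero margin is illustrative, not taken from the official examples:

>>> loss = nn.MarginRankingLoss(margin=1.0)
>>> a = torch.tensor(0.2, requires_grad=True)
>>> b = torch.tensor(0.8, requires_grad=True)
>>> # y = 1 says a should outrank b; here it does not, so the pair is penalized:
>>> # max(0, -(0.2 - 0.8) + 1.0) = 1.6
>>> out = loss(a, b, torch.tensor(1.0))
>>> out.backward()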