# A Summary of the Loss Functions Directly Callable in PyTorch (Part 1)

2023-03-14 22:44:01



Here we give a brief overview of the loss functions that can be called directly in PyTorch. For each loss function we note how it is used, and for the less common ones we link to blog posts that introduce them, so that these loss functions are easier to learn and use.


## L1Loss


Measures the mean absolute error (MAE) between each element of the input and the target.


Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and target y.


torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean')

Parameters:


size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True

reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True

reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'


Usage:

>>> import torch
>>> import torch.nn as nn
>>> loss = nn.L1Loss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
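
The reduction argument listed above is easy to verify by hand. The following is a small sketch (my own addition, not from the original post) comparing the three reduction modes for L1Loss:

# Sketch: compare the three reduction modes of L1Loss.
import torch
import torch.nn as nn
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
loss_none = nn.L1Loss(reduction='none')(input, target)  # per-element |x - y|, shape (3, 5)
loss_sum = nn.L1Loss(reduction='sum')(input, target)    # scalar: sum over all elements
loss_mean = nn.L1Loss(reduction='mean')(input, target)  # scalar: sum / number of elements
print(loss_none.shape)                                   # torch.Size([3, 5])
print(torch.allclose(loss_mean, loss_sum / loss_none.numel()))  # True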


## MSELoss


Measures the mean squared error (squared L2 norm) between each element of the input and the target.

torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')

Parameters:


size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True

reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True

reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'


Usage:

import torch
import torch.nn as nn
loss = nn.MSELoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
output = loss(input, target)
output.backward()
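
As a quick sanity check (my own sketch, not part of the original example), MSELoss with the default reduction='mean' is simply the mean of the squared element-wise differences:

# Sketch: MSELoss(reduction='mean') equals mean((input - target) ** 2).
import torch
import torch.nn as nn
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
loss = nn.MSELoss()(input, target)
manual = ((input - target) ** 2).mean()
print(torch.allclose(loss, manual))  # True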


## CrossEntropyLoss


This criterion computes the cross entropy loss between the input and the target.

torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean', label_smoothing=0.0)

The input is expected to contain raw, unnormalized scores for each class. input has to be a Tensor of size (C) for unbatched input, (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K ≥ 1 for the K-dimensional case. The last is useful for higher-dimension inputs, such as computing cross entropy loss per-pixel for 2D images.
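
Here is a minimal sketch of that K-dimensional case (the shapes are my own illustrative choice, not from the original post): per-pixel cross entropy for a batch of 2D images, where the input carries raw scores per class and the target carries one class index per pixel.

# Sketch: per-pixel cross entropy for 2D images (K-dimensional case).
import torch
import torch.nn as nn
loss = nn.CrossEntropyLoss()
input = torch.randn(4, 10, 32, 32, requires_grad=True)  # (minibatch=4, C=10, H=32, W=32) raw scores
target = torch.randint(0, 10, (4, 32, 32))              # (minibatch, H, W) class indices per pixel
output = loss(input, target)                            # scalar, averaged over all pixels
output.backward()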


Parameters:


weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C

size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True

ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Note that ignore_index is only applicable when the target contains class indices.

reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True

reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'

label_smoothing (float, optional) – A float in [0.0, 1.0]. Specifies the amount of smoothing when computing the loss, where 0.0 means no smoothing. The targets become a mixture of the original ground truth and a uniform distribution as described in Rethinking the Inception Architecture for Computer Vision. Default: 0.0.


Usage:

import torch
import torch.nn as nn
# Example of target with class indices
loss = nn.CrossEntropyLoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5)
output = loss(input, target)
output.backward()
# Example of target with class probabilities
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5).softmax(dim=1)
output = loss(input, target)
output.backward()
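
Beyond the basic usage, the optional arguments described above can be combined freely. The snippet below is a hedged sketch (the weights and values are purely illustrative, my own choice) of weight, ignore_index and label_smoothing:

# Sketch: optional keyword arguments of CrossEntropyLoss.
import torch
import torch.nn as nn
input = torch.randn(3, 5, requires_grad=True)
target = torch.tensor([1, 0, 4])
# weight: rescale the contribution of each class (here class 4 counts twice as much).
weighted = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 1.0, 1.0, 1.0, 2.0]))
# ignore_index: targets equal to -100 are ignored and do not contribute to the gradient.
ignoring = nn.CrossEntropyLoss(ignore_index=-100)
# label_smoothing: mix the one-hot targets with a uniform distribution (10% here).
smoothed = nn.CrossEntropyLoss(label_smoothing=0.1)
print(weighted(input, target), ignoring(input, target), smoothed(input, target))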