I am testing the output of tf.keras.losses.CategoricalCrossentropy, and it gives me values that differ from the definition. My understanding of cross entropy is:
import tensorflow as tf

def ce_loss_def(y_true, y_pred):
    return tf.reduce_sum(-tf.math.multiply(y_true, tf.math.log(y_pred)))
Suppose I have values like these:
pred = [0.1, 0.1, 0.1, 0.7]
target = [0, 0, 0, 1]
pred = tf.constant(pred, dtype=tf.float32)
target = tf.constant(target, dtype=tf.float32)

# pred_2 raises the probability of one non-target class (0.3 instead of 0.1)
pred_2 = [0.1, 0.3, 0.1, 0.7]
pred_2 = tf.constant(pred_2, dtype=tf.float32)
By the definition, I thought it should ignore the probabilities of the non-target classes, since the one-hot target zeroes out every term except -log(0.7) ≈ 0.3567:
ce_loss_def(y_true=target, y_pred=pred), ce_loss_def(y_true=target, y_pred=pred_2)
(<tf.Tensor: shape=(), dtype=float32, numpy=0.35667497>,
<tf.Tensor: shape=(), dtype=float32, numpy=0.35667497>)
But tf.keras.losses.CategoricalCrossentropy does not give me the same results:
ce_loss_keras = tf.keras.losses.CategoricalCrossentropy()
ce_loss_keras(y_true=target, y_pred=pred), ce_loss_keras(y_true=target, y_pred=pred_2)
Output:
(<tf.Tensor: shape=(), dtype=float32, numpy=0.35667497>,
<tf.Tensor: shape=(), dtype=float32, numpy=0.5389965>)
What am I missing?
Here is a link to the notebook I used to get these results: https://colab.research.google.com/drive/1T69vn7MCGMSQ8hlRkyve6_EPxIZC1IKb#scrollTo=dHZruq-PGyzO
Answer 1
I found the problem. The elements of the vector are automatically rescaled so that they sum to 1, because the values are treated as probabilities.
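A minimal sketch of that rescaling, assuming Keras divides y_pred by its sum over the last axis before taking the log (ce_loss_normalized is a hypothetical helper of mine, not the actual Keras implementation; it reproduces the 0.5389965 seen above):
import tensorflow as tf

def ce_loss_normalized(y_true, y_pred):
    # Assumed behavior: rescale y_pred so the probabilities sum to 1,
    # then apply the plain cross-entropy definition.
    y_pred = y_pred / tf.reduce_sum(y_pred, axis=-1, keepdims=True)
    return tf.reduce_sum(-y_true * tf.math.log(y_pred))

target = tf.constant([0, 0, 0, 1], dtype=tf.float32)
pred_2 = tf.constant([0.1, 0.3, 0.1, 0.7], dtype=tf.float32)  # sums to 1.2

print(ce_loss_normalized(target, pred_2))  # -log(0.7 / 1.2) ≈ 0.5389965
With predictions that already sum to 1, the rescaling is a no-op, which the following demonstrates: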
import tensorflow as tf
ce_loss = tf.keras.losses.CategoricalCrossentropy()
pred = [0.05, 0.2, 0.25, 0.5]
target = [0, 0, 0, 1]
pred = tf.constant(pred, dtype=tf.float32)
target = tf.constant(target, dtype=tf.float32)

# pred_2 redistributes the non-target probabilities, but both vectors
# sum to 1 and give the target class a probability of 0.5.
pred_2 = [0.1, 0.3, 0.1, 0.5]
pred_2 = tf.constant(pred_2, dtype=tf.float32)

c1, c2 = ce_loss(y_true=target, y_pred=pred), ce_loss(y_true=target, y_pred=pred_2)
print("CE loss with pred: {}. CE loss with different non-target probabilities: {}".format(c1, c2))
gives
CE loss with pred: 0.6931471824645996. CE loss with different non-target probabilities: 0.6931471824645996
As expected: both losses equal -log(0.5), since each prediction assigns the target class a probability of 0.5.
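As a side note, if you want to avoid the implicit rescaling altogether, a common pattern is to pass raw scores and construct the loss with from_logits=True, so the softmax (which always sums to 1) is applied inside the loss. A small sketch with made-up logits:
import tensorflow as tf

ce_loss_logits = tf.keras.losses.CategoricalCrossentropy(from_logits=True)

target = tf.constant([0, 0, 0, 1], dtype=tf.float32)
logits = tf.constant([1.0, 1.0, 1.0, 3.0])  # unnormalized scores, not probabilities

# Softmax is applied inside the loss, so no separate normalization step is needed.
print(ce_loss_logits(y_true=target, y_pred=logits))  # -log(softmax(logits)[3]) ≈ 0.3408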