A Brief Look at the Relationship Between loss and val_loss in Keras

How a custom loss function can accept input values
Keras is quite heavily encapsulated, and the examples on the official site are rather opaque; the following answer was found on Stack Overflow:
You can wrap the loss function as an inner function and pass your input tensor to it (as is commonly done when passing additional arguments to a loss function).
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def custom_loss_wrapper(input_tensor):
    def custom_loss(y_true, y_pred):
        return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)
    return custom_loss

input_tensor = Input(shape=(10,))
hidden = Dense(100, activation='relu')(input_tensor)
out = Dense(1, activation='sigmoid')(hidden)
model = Model(input_tensor, out)
model.compile(loss=custom_loss_wrapper(input_tensor), optimizer='adam')
You can verify that input_tensor, and therefore the loss value, changes as different X is passed to the model.
import numpy as np

X = np.random.rand(1000, 10)
y = np.random.randint(2, size=1000)
X *= 1000
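To see the effect directly without going through compile/fit, the wrapped loss can be evaluated on plain tensors. The sketch below (assuming TensorFlow 2.x in eager mode; shapes and values are illustrative) shows that the extra K.mean(input_tensor) term makes the loss depend on the input batch, not just on y_true/y_pred:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

def custom_loss_wrapper(input_tensor):
    # inner function has the (y_true, y_pred) signature Keras expects
    def custom_loss(y_true, y_pred):
        return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)
    return custom_loss

y_true = tf.constant([[1.0], [0.0]])
y_pred = tf.constant([[0.9], [0.1]])

x_small = tf.constant(np.random.rand(2, 10).astype("float32"))
loss_small = custom_loss_wrapper(x_small)(y_true, y_pred)
# scaling the input by 1000 inflates the K.mean(input_tensor) term,
# so the same predictions now yield a much larger loss
loss_large = custom_loss_wrapper(x_small * 1000)(y_true, y_pred)

print(float(tf.reduce_mean(loss_small)), float(tf.reduce_mean(loss_large)))
```

The binary cross-entropy part is identical in both calls; only the input-dependent term differs, which is exactly what the wrapper trick buys you.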
fit_generator
fit_generator ultimately calls train_on_batch, which allows x to be a dictionary.
It can also be a list, in which case x is expected to map 1:1 to the inputs defined in Model(input=[in1, …], …)
### generator
yield [inputX_1, inputX_2], y
### model
model = Model(inputs=[inputX_1, inputX_2], outputs=...)
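Putting the two fragments together, here is a minimal runnable sketch of a two-input model fed by a generator (layer sizes, batch shape, and the batch_generator helper are illustrative assumptions; in TensorFlow 2.x, model.fit accepts generators directly and fit_generator is deprecated):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense, Concatenate

in1 = Input(shape=(4,), name="inputX_1")
in2 = Input(shape=(6,), name="inputX_2")
merged = Concatenate()([in1, in2])
out = Dense(1, activation="sigmoid")(merged)
model = keras.Model(inputs=[in1, in2], outputs=out)
model.compile(loss="binary_crossentropy", optimizer="adam")

def batch_generator(batch_size=8):
    while True:
        x1 = np.random.rand(batch_size, 4)
        x2 = np.random.rand(batch_size, 6)
        y = np.random.randint(2, size=(batch_size, 1))
        yield [x1, x2], y  # the list maps 1:1 to [in1, in2]

history = model.fit(batch_generator(), steps_per_epoch=2, epochs=1, verbose=0)
print(history.history["loss"])
```

Because the generator is infinite, steps_per_epoch must be given so Keras knows where each epoch ends.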
Additional note: in Keras, depending on which loss function you choose, the labels passed to model.fit can be either one-hot vectors or plain integer labels.
Without further ado, here is the code:
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# plt.figure()
# plt.imshow(train_images[0])
# plt.colorbar()
# plt.grid(False)
# plt.show()
train_images = train_images / 255.0
test_images = test_images / 255.0
# plt.figure(figsize=(10, 10))
# for i in range(25):
#     plt.subplot(5, 5, i + 1)
#     plt.xticks([])
#     plt.yticks([])
#     plt.grid(False)
#     plt.imshow(train_images[i], cmap=plt.cm.binary)
#     plt.xlabel(class_names[train_labels[i]])
# plt.show()
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              # with loss='sparse_categorical_crossentropy', the labels would not need to be one-hot encoded; integer labels could be used directly
              metrics=['accuracy'])
one_hot_train_labels = keras.utils.to_categorical(train_labels, num_classes=10)
model.fit(train_images, one_hot_train_labels, epochs=10)
one_hot_test_labels = keras.utils.to_categorical(test_labels, num_classes=10)
test_loss, test_acc = model.evaluate(test_images, one_hot_test_labels)
print('\nTest accuracy:', test_acc)
# predictions = model.predict(test_images)
# predictions[0]
# np.argmax(predictions[0])
# test_labels[0]
If loss='categorical_crossentropy', the second argument to fit must be one-hot encoded,
whereas if loss='sparse_categorical_crossentropy', the labels do not need to be converted to one-hot vectors and integer labels can be used directly.
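The equivalence between the two loss choices can be checked without training at all, by evaluating both losses on the same predictions (a minimal check assuming TensorFlow 2.x; the probability values are made up for illustration):

```python
import numpy as np
from tensorflow import keras

labels = np.array([0, 2, 1])                      # integer labels
one_hot = keras.utils.to_categorical(labels, num_classes=3)
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6],
                  [0.2, 0.5, 0.3]])               # mock softmax outputs

# categorical_crossentropy expects one-hot targets,
# sparse_categorical_crossentropy expects integer targets
cce = keras.losses.categorical_crossentropy(one_hot, probs).numpy()
scce = keras.losses.sparse_categorical_crossentropy(labels, probs).numpy()
print(np.allclose(cce, scce))
```

Both reduce to -log(p) of the true class per sample, so the per-sample values match; the only difference is the label format each one expects.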
That concludes this brief look at the relationship between loss and val_loss in Keras. I hope it serves as a useful reference.

Published on 2024-09-22 03:58:09.

Link: https://www.17tex.com/tex/4/215945.html
