[TensorFlow 2.0] Three Ways to Build a Model

A model can be built in three ways: with Sequential, adding layers in order; with the functional API, building models of arbitrary structure; or by subclassing the Model base class to define a custom model. For a purely sequential architecture, prefer the Sequential method.
If the model has multiple inputs or outputs, needs to share weights, or contains non-sequential structure such as residual connections, the functional API is recommended.
Unless there is a specific need, avoid building models by subclassing Model: it offers great flexibility, but also more opportunities for mistakes.
Below, the three approaches are demonstrated on the IMDB movie-review classification problem.
import numpy as np
import pandas as pd
import tensorflow as tf
from tqdm import tqdm
from tensorflow.keras import *
train_token_path = "./data/imdb/train_token.csv"
test_token_path = "./data/imdb/test_token.csv"
MAX_WORDS = 10000  # We will only consider the top 10,000 words in the dataset
MAX_LEN = 200  # We will cut reviews after 200 words
BATCH_SIZE = 20
# Build the data pipeline
def parse_line(line):
    t = tf.strings.split(line,"\t")
    label = tf.reshape(tf.cast(tf.strings.to_number(t[0]),tf.int32),(-1,))
    features = tf.cast(tf.strings.to_number(tf.strings.split(t[1]," ")),tf.int32)
    return (features,label)
ds_train = tf.data.TextLineDataset(filenames = [train_token_path]) \
    .map(parse_line,num_parallel_calls = tf.data.experimental.AUTOTUNE) \
    .shuffle(buffer_size = 1000).batch(BATCH_SIZE) \
    .prefetch(tf.data.experimental.AUTOTUNE)

ds_test = tf.data.TextLineDataset(filenames = [test_token_path]) \
    .map(parse_line,num_parallel_calls = tf.data.experimental.AUTOTUNE) \
    .shuffle(buffer_size = 1000).batch(BATCH_SIZE) \
    .prefetch(tf.data.experimental.AUTOTUNE)
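The token files themselves are not shown; from parse_line one can infer that each line is assumed to be a label, a tab, and then space-separated word indices. As an illustration only (not part of the original, and not the TensorFlow pipeline), a plain-Python analog with a made-up sample line:

```python
# Plain-Python analog of parse_line, for illustration only.
# Assumed line format (inferred, not from the original): "label\tid id id ..."
def parse_line_py(line):
    label_str, features_str = line.split("\t")
    label = [int(label_str)]                                  # like tf.reshape(..., (-1,))
    features = [int(tok) for tok in features_str.split(" ")]  # word indices
    return (features, label)

print(parse_line_py("1\t12 7 4051 18"))  # ([12, 7, 4051, 18], [1])
```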
1. Building a model layer by layer with Sequential
tf.keras.backend.clear_session()
model = models.Sequential()
model.add(layers.Embedding(MAX_WORDS,7,input_length=MAX_LEN))
model.add(layers.Conv1D(filters = 64,kernel_size = 5,activation = "relu"))
model.add(layers.MaxPool1D(2))
model.add(layers.Conv1D(filters = 32,kernel_size = 3,activation = "relu"))
model.add(layers.MaxPool1D(2))
model.add(layers.Flatten())
model.add(layers.Dense(1,activation = "sigmoid"))
model.compile(optimizer='Nadam',
              loss='binary_crossentropy',
              metrics=['accuracy',"AUC"])
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                Output Shape              Param #
=================================================================
embedding (Embedding)        (None, 200, 7)            70000
_________________________________________________________________
conv1d (Conv1D)              (None, 196, 64)          2304
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 98, 64)            0
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 96, 32)            6176
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 48, 32)            0
_________________________________________________________________
flatten (Flatten)            (None, 1536)              0
_________________________________________________________________
dense (Dense)                (None, 1)                1537
=================================================================
Total params: 80,017
Trainable params: 80,017
Non-trainable params: 0
_________________________________________________________________
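As a sanity check (not in the original), the Param # column above can be reproduced by hand: an Embedding layer has vocab × dim weights, a Conv1D layer has kernel × in_channels × filters weights plus one bias per filter, and the Dense layer has one weight per flattened input plus a bias:

```python
# Hand check of the Param # column from model.summary() (plain Python).
def conv1d_params(kernel, in_ch, filters):
    # kernel*in_ch*filters weights plus one bias per filter
    return kernel * in_ch * filters + filters

embedding = 10000 * 7             # MAX_WORDS * embedding dim -> 70000
conv1 = conv1d_params(5, 7, 64)   # -> 2304
conv2 = conv1d_params(3, 64, 32)  # -> 6176
dense = 48 * 32 + 1               # Flatten: 48 steps * 32 channels, plus bias -> 1537
total = embedding + conv1 + conv2 + dense
print(total)  # 80017, matching model.summary()
```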
import datetime

baselogger = callbacks.BaseLogger(stateful_metrics=["auc"])
logdir = "./data/keras_model/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
history = model.fit(ds_train,validation_data = ds_test,
                    epochs = 6,callbacks=[baselogger,tensorboard_callback])
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
def plot_metric(history, metric):
    train_metrics = history.history[metric]
    val_metrics = history.history['val_'+metric]
    epochs = range(1, len(train_metrics) + 1)
    plt.plot(epochs, train_metrics, 'bo--')
    plt.plot(epochs, val_metrics, 'ro-')
    plt.title('Training and validation '+ metric)
    plt.xlabel("Epochs")
    plt.ylabel(metric)
    plt.legend(["train_"+metric, 'val_'+metric])
    plt.show()
plot_metric(history,"auc")
This does not run successfully here; the error is as follows:
Epoch 1/6
1000/Unknown - 17s 17ms/step - loss: 0.1133 - accuracy: 0.9588 - auc: 0.9918
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-17-8cd49fdfb6d8> in <module>()
4 tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
5 history = model.fit(ds_train,validation_data = ds_test,
----> 6        epochs = 6,callbacks=[baselogger,tensorboard_callback])
7 """
8 %matplotlib inline
3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py in on_epoch_end(self, epoch, logs)
795  def on_epoch_end(self, epoch, logs=None):
796    if logs is not None:
--> 797      for k in self.params['metrics']:
798        if k in self.totals:
799          # Make value available to next callbacks.
KeyError: 'metrics'
So I had to switch to the following instead (BaseLogger still reads self.params['metrics'], which no longer exists in TF 2.x, so it is dropped and only the TensorBoard callback is kept):
import datetime
logdir = "./data/keras_model/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
history = model.fit(ds_train,validation_data = ds_test,epochs = 6,callbacks=[tensorboard_callback])
The results:
Epoch 1/6
1000/1000 [==============================] - 44s 44ms/step - loss: 0.0058 - accuracy: 0.9980 - auc: 0.9999 - val_loss: 1.5239 - val_accuracy: 0.8598 - val_auc: 0.8961
Epoch 2/6
1000/1000 [==============================] - 44s 44ms/step - loss: 0.0011 - accuracy: 0.9996 - auc: 1.0000 - val_loss: 1.7804 - val_accuracy: 0.8610 - val_auc: 0.8920
Epoch 3/6
1000/1000 [==============================] - 44s 44ms/step - loss: 0.0034 - accuracy: 0.9990 - auc: 0.9999 - val_loss: 1.8452 - val_accuracy: 0.8524 - val_auc: 0.8861
Epoch 4/6
1000/1000 [==============================] - 43s 43ms/step - loss: 0.0107 - accuracy: 0.9969 - auc: 0.9995 - val_loss: 1.6515 - val_accuracy: 0.8582 - val_auc: 0.8901
Epoch 5/6
1000/1000 [==============================] - 44s 44ms/step - loss: 0.0022 - accuracy: 0.9994 - auc: 1.0000 - val_loss: 1.7680 - val_accuracy: 0.8522 - val_auc: 0.8864
Epoch 6/6
1000/1000 [==============================] - 44s 44ms/step - loss: 0.0052 - accuracy: 0.9979 - auc: 0.9999 - val_loss: 1.7506 - val_accuracy: 0.8554 - val_auc: 0.8918
2. Creating models of arbitrary structure with the functional API
tf.keras.backend.clear_session()
inputs = layers.Input(shape=[MAX_LEN])
x  = layers.Embedding(MAX_WORDS,7)(inputs)
branch1 = layers.SeparableConv1D(64,3,activation="relu")(x)
branch1 = layers.MaxPool1D(3)(branch1)
branch1 = layers.SeparableConv1D(32,3,activation="relu")(branch1)
branch1 = layers.GlobalMaxPool1D()(branch1)
branch2 = layers.SeparableConv1D(64,5,activation="relu")(x)
branch2 = layers.MaxPool1D(5)(branch2)
branch2 = layers.SeparableConv1D(32,5,activation="relu")(branch2)
branch2 = layers.GlobalMaxPool1D()(branch2)
branch3 = layers.SeparableConv1D(64,7,activation="relu")(x)
branch3 = layers.MaxPool1D(7)(branch3)
branch3 = layers.SeparableConv1D(32,7,activation="relu")(branch3)
branch3 = layers.GlobalMaxPool1D()(branch3)
concat = layers.Concatenate()([branch1,branch2,branch3])
outputs = layers.Dense(1,activation = "sigmoid")(concat)
model = models.Model(inputs = inputs,outputs = outputs)
model.compile(optimizer='Nadam',
              loss='binary_crossentropy',
              metrics=['accuracy',"AUC"])
model.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape        Param #    Connected to
==================================================================================================
input_1 (InputLayer)            [(None, 200)]        0
__________________________________________________________________________________________________
embedding (Embedding)          (None, 200, 7)      70000      input_1[0][0]
__________________________________________________________________________________________________
separable_conv1d (SeparableConv (None, 198, 64)      533        embedding[0][0]
__________________________________________________________________________________________________
separable_conv1d_2 (SeparableCo (None, 196, 64)      547        embedding[0][0]
__________________________________________________________________________________________________
separable_conv1d_4 (SeparableCo (None, 194, 64)      561        embedding[0][0]
__________________________________________________________________________________________________
max_pooling1d (MaxPooling1D)    (None, 66, 64)      0          separable_conv1d[0][0]
__________________________________________________________________________________________________
max_pooling1d_1 (MaxPooling1D)  (None, 39, 64)      0          separable_conv1d_2[0][0]
__________________________________________________________________________________________________
max_pooling1d_2 (MaxPooling1D)  (None, 27, 64)      0          separable_conv1d_4[0][0]
__________________________________________________________________________________________________
separable_conv1d_1 (SeparableCo (None, 64, 32)      2272        max_pooling1d[0][0]
__________________________________________________________________________________________________
separable_conv1d_3 (SeparableCo (None, 35, 32)      2400        max_pooling1d_1[0][0]
__________________________________________________________________________________________________
separable_conv1d_5 (SeparableCo (None, 21, 32)      2528        max_pooling1d_2[0][0]
__________________________________________________________________________________________________
global_max_pooling1d (GlobalMax (None, 32)          0          separable_conv1d_1[0][0]
__________________________________________________________________________________________________
global_max_pooling1d_1 (GlobalM (None, 32)          0          separable_conv1d_3[0][0]
__________________________________________________________________________________________________
global_max_pooling1d_2 (GlobalM (None, 32)          0          separable_conv1d_5[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate)      (None, 96)          0          global_max_pooling1d[0][0]
global_max_pooling1d_1[0][0]
global_max_pooling1d_2[0][0]
__________________________________________________________________________________________________
dense (Dense)                  (None, 1)            97          concatenate[0][0]
==================================================================================================
Total params: 78,938
Trainable params: 78,938
Non-trainable params: 0
__________________________________________________________________________________________________
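The separable convolutions explain the small parameter counts above. A SeparableConv1D factors the convolution into a depthwise part (kernel × in_channels weights) and a pointwise part (in_channels × filters weights), plus one bias per filter. A quick check against the summary (not in the original):

```python
# SeparableConv1D parameter count: depthwise + pointwise + bias (plain Python).
def sep_conv1d_params(kernel, in_ch, filters):
    return kernel * in_ch + in_ch * filters + filters

branch_in  = sum(sep_conv1d_params(k, 7, 64) for k in (3, 5, 7))   # 533 + 547 + 561
branch_out = sum(sep_conv1d_params(k, 64, 32) for k in (3, 5, 7))  # 2272 + 2400 + 2528
embedding = 10000 * 7
dense = 96 + 1                    # Concatenate gives 3 * 32 = 96 features, plus bias
total = embedding + branch_in + branch_out + dense
print(total)  # 78938, matching model.summary()
```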
import datetime
logdir = "./data/keras_model/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
history = model.fit(ds_train,validation_data = ds_test,epochs = 6,callbacks=[tensorboard_callback])
Epoch 1/6
1000/1000 [==============================] - 28s 28ms/step - loss: 0.5210 - accuracy: 0.7120 - auc: 0.8098 - val_loss: 0.3512 - val_accuracy: 0.8482 - val_auc: 0.9254
Epoch 2/6
1000/1000 [==============================] - 27s 27ms/step - loss: 0.2842 - accuracy: 0.8805 - auc: 0.9510 - val_loss: 0.3302 - val_accuracy: 0.8588 - val_auc: 0.9384
Epoch 3/6
1000/1000 [==============================] - 27s 27ms/step - loss: 0.1931 - accuracy: 0.9265 - auc: 0.9772 - val_loss: 0.3955 - val_accuracy: 0.8512 - val_auc: 0.9336
Epoch 4/6
1000/1000 [==============================] - 27s 27ms/step - loss: 0.1203 - accuracy: 0.9594 - auc: 0.9906 - val_loss: 0.4669 - val_accuracy: 0.8494 - val_auc: 0.9273
Epoch 5/6
1000/1000 [==============================] - 27s 27ms/step - loss: 0.0664 - accuracy: 0.9798 - auc: 0.9965 - val_loss: 0.5963 - val_accuracy: 0.8476 - val_auc: 0.9158
Epoch 6/6
1000/1000 [==============================] - 27s 27ms/step - loss: 0.0305 - accuracy: 0.9934 - auc: 0.9987 - val_loss: 0.7246 - val_accuracy: 0.8440 - val_auc: 0.9063

plot_metric(history,"auc")
3. Creating a custom model by subclassing Model
# First define a residual block as a custom Layer
class ResBlock(layers.Layer):
    def __init__(self, kernel_size, **kwargs):
        super(ResBlock, self).__init__(**kwargs)
        self.kernel_size = kernel_size

    def build(self,input_shape):
        self.conv1 = layers.Conv1D(filters=64,kernel_size=self.kernel_size,
                                   activation = "relu",padding="same")
        self.conv2 = layers.Conv1D(filters=32,kernel_size=self.kernel_size,
                                   activation = "relu",padding="same")
        self.conv3 = layers.Conv1D(filters=input_shape[-1],
                                   kernel_size=self.kernel_size,activation = "relu",padding="same")
        self.maxpool = layers.MaxPool1D(2)
        super(ResBlock,self).build(input_shape) # equivalent to setting self.built = True

    def call(self, inputs):
        x = self.conv1(inputs)
        x = self.conv2(x)
        x = self.conv3(x)
        x = layers.Add()([inputs,x])
        x = self.maxpool(x)
        return x

    # To make a custom Layer serializable when composed into a model via the
    # functional API, define a get_config method.
    def get_config(self):
        config = super(ResBlock, self).get_config()
        config.update({'kernel_size': self.kernel_size})
        return config

# Test ResBlock
resblock = ResBlock(kernel_size = 3)
resblock.build(input_shape = (None,200,7))
resblock.compute_output_shape(input_shape=(None,200,7))
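As a shape sketch (not in the original): the "same"-padded convolutions keep the sequence length, conv3 restores the channel count to input_shape[-1] so the Add is shape-compatible, and MaxPool1D(2) then halves the length. compute_output_shape should therefore behave like this hypothetical plain-Python stand-in:

```python
# Hypothetical plain-Python stand-in for ResBlock.compute_output_shape.
def resblock_output_shape(input_shape, pool_size=2):
    batch, length, channels = input_shape
    # "same"-padded convs and the residual Add preserve (length, channels);
    # only the final MaxPool1D(pool_size) changes the shape.
    return (batch, length // pool_size, channels)

print(resblock_output_shape((None, 200, 7)))  # (None, 100, 7)
```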
# Custom model; in practice Sequential or the functional API could also be used here
class ImdbModel(models.Model):
    def __init__(self):
        super(ImdbModel, self).__init__()

    def build(self,input_shape):
        self.embedding = layers.Embedding(MAX_WORDS,7)
        self.block1 = ResBlock(7)
        self.block2 = ResBlock(5)
        self.dense = layers.Dense(1,activation = "sigmoid")
        super(ImdbModel,self).build(input_shape)

    def call(self, x):
        x = self.embedding(x)
        x = self.block1(x)
        x = self.block2(x)
        x = layers.Flatten()(x)
        x = self.dense(x)
        return(x)
tf.keras.backend.clear_session()
model = ImdbModel()
model.build(input_shape =(None,200))
model.summary()
model.compile(optimizer='Nadam',
              loss='binary_crossentropy',
              metrics=['accuracy',"AUC"])

import datetime

logdir = "./tflogs/keras_model/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
history = model.fit(ds_train,validation_data = ds_test,
                    epochs = 6,callbacks=[tensorboard_callback])
plot_metric(history,"auc")
Model: "imdb_model"
_________________________________________________________________
Layer (type)                Output Shape              Param #
=================================================================
embedding (Embedding)        multiple                  70000
_________________________________________________________________
res_block (ResBlock)        multiple                  19143
_________________________________________________________________
res_block_1 (ResBlock)      multiple                  13703
_________________________________________________________________
dense (Dense)                multiple                  351
=================================================================
Total params: 103,197
Trainable params: 103,197
Non-trainable params: 0
_________________________________________________________________
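The ResBlock parameter counts in the summary can likewise be verified by hand (a sanity check, not in the original): with 7 input channels from the embedding, each block holds three Conv1D layers, and each block halves the sequence length, so Flatten sees 50 × 7 = 350 values:

```python
# Hand check of the subclassed model's parameter counts (plain Python).
def conv1d_params(kernel, in_ch, filters):
    return kernel * in_ch * filters + filters   # weights + bias

def resblock_params(kernel, in_ch=7):
    return (conv1d_params(kernel, in_ch, 64)     # conv1
            + conv1d_params(kernel, 64, 32)      # conv2
            + conv1d_params(kernel, 32, in_ch))  # conv3 restores in_ch channels

embedding = 10000 * 7          # -> 70000
block1 = resblock_params(7)    # -> 19143
block2 = resblock_params(5)    # -> 13703
dense = 50 * 7 + 1             # length 200 -> 100 -> 50, 7 channels, plus bias -> 351
print(embedding + block1 + block2 + dense)  # 103197
```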
Epoch 1/6
1000/1000 [==============================] - 44s 44ms/step - loss: 0.5311 - accuracy: 0.6953 - auc: 0.7931 - val_loss: 0.3333 - val_accuracy: 0.8522 - val_auc: 0.9352
Epoch 2/6
1000/1000 [==============================] - 43s 43ms/step - loss: 0.2507 - accuracy: 0.8985 - auc: 0.9619 - val_loss: 0.3906 - val_accuracy: 0.8404 - val_auc: 0.9427
Epoch 3/6
1000/1000 [==============================] - 43s 43ms/step - loss: 0.1448 - accuracy: 0.9465 - auc: 0.9868 - val_loss: 0.3965 - val_accuracy: 0.8742 - val_auc: 0.9403
Epoch 4/6
1000/1000 [==============================] - 43s 43ms/step - loss: 0.0758 - accuracy: 0.9745 - auc: 0.9958 - val_loss: 0.5496 - val_accuracy: 0.8648 - val_auc: 0.9279
Epoch 5/6
1000/1000 [==============================] - 43s 43ms/step - loss: 0.0296 - accuracy: 0.9898 - auc: 0.9990 - val_loss: 0.8675 - val_accuracy: 0.8592 - val_auc: 0.9111
Epoch 6/6
1000/1000 [==============================] - 43s 43ms/step - loss: 0.0208 - accuracy: 0.9927 - auc: 0.9995 - val_loss: 0.9153 - val_accuracy: 0.8578 - val_auc: 0.9094

Published on 2024-09-23 06:26:55.

Original link: https://www.17tex.com/xueshu/453312.html
