
TensorFlow: Adding Checkpoints to a Model


1. Checkpoints

Saving a model is not limited to after training: we also need to save during training, because a TensorFlow training run can be interrupted, and we naturally want to preserve the parameters learned so far rather than having to retrain from scratch next time.

Saving the model during training like this is conventionally called saving a checkpoint.

2. Adding a Checkpoint

By adding checkpoints we can generate and later load checkpoint files, and we can specify how many checkpoint files are kept. For example, the Saver argument max_to_keep=1 means at most one checkpoint file is retained. When saving, pass the iteration number as in the following code.

```python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import os

# Synthetic linear data with noise
train_x = np.linspace(-5, 3, 50)
train_y = train_x * 5 + 10 + np.random.random(50) * 10 - 5
plt.plot(train_x, train_y, 'r.')
plt.grid(True)
plt.show()

tf.reset_default_graph()

X = tf.placeholder(dtype=tf.float32)
Y = tf.placeholder(dtype=tf.float32)

w = tf.Variable(tf.random.truncated_normal([1]), name='Weight')
b = tf.Variable(tf.random.truncated_normal([1]), name='bias')
z = tf.multiply(X, w) + b

cost = tf.reduce_mean(tf.square(Y - z))
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
init = tf.global_variables_initializer()

training_epochs = 20
display_step = 2

# Keep up to 15 checkpoint files
saver = tf.train.Saver(max_to_keep=15)
savedir = "model/"

if __name__ == '__main__':
    with tf.Session() as sess:
        sess.run(init)
        loss_list = []
        for epoch in range(training_epochs):
            for (x, y) in zip(train_x, train_y):
                sess.run(optimizer, feed_dict={X: x, Y: y})
            if epoch % display_step == 0:
                loss = sess.run(cost, feed_dict={X: x, Y: y})
                loss_list.append(loss)
                print('Iter: ', epoch, ' Loss: ', loss)
            w_, b_ = sess.run([w, b], feed_dict={X: x, Y: y})
            # Pass the iteration number so each checkpoint file is suffixed
            # with the epoch, e.g. linear.cpkt-10
            saver.save(sess, savedir + "linear.cpkt", global_step=epoch)
        print("Finished")
        print("W: ", w_, " b: ", b_, " loss: ", loss)

        plt.plot(train_x, train_x * w_ + b_, 'g-', train_x, train_y, 'r.')
        plt.grid(True)
        plt.show()

    # Restore the checkpoint saved at a specific epoch
    load_epoch = 10
    with tf.Session() as sess2:
        sess2.run(tf.global_variables_initializer())
        saver.restore(sess2, savedir + "linear.cpkt-" + str(load_epoch))
        print(sess2.run([w, b], feed_dict={X: train_x, Y: train_y}))
```

In the code above, saver.save(sess, savedir + "linear.cpkt", global_step=epoch) passes the trained parameters to the checkpoint for saving. Setting saver = tf.train.Saver(max_to_keep=1) would keep only a single file, so each new model saved during training overwrites the previous one.
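The rotation behavior of max_to_keep can be sketched in plain Python (the helper `save_with_rotation` below is hypothetical, not a TensorFlow API; Saver additionally deletes the dropped files from disk):

```python
from collections import deque

def save_with_rotation(kept, new_ckpt, max_to_keep=1):
    # Mimic Saver's max_to_keep rotation: append the newest checkpoint
    # path and drop the oldest ones until only max_to_keep remain.
    kept.append(new_ckpt)
    dropped = []
    while len(kept) > max_to_keep:
        dropped.append(kept.popleft())
    return dropped

kept = deque()
for epoch in range(4):
    save_with_rotation(kept, 'model/linear.cpkt-%d' % epoch, max_to_keep=1)
print(list(kept))  # ['model/linear.cpkt-3'] -- only the newest survives
```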

```python
cpkt = tf.train.get_checkpoint_state(savedir)
if cpkt and cpkt.model_checkpoint_path:
    saver.restore(sess2, cpkt.model_checkpoint_path)

kpt = tf.train.latest_checkpoint(savedir)
saver.restore(sess2, kpt)
```

Either of the two methods above can also be used to load checkpoint files; tf.train.latest_checkpoint(savedir) loads the most recent checkpoint. With this approach we can also save checkpoints only at chosen training steps, for example at every epoch that is a multiple of 5.
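Both lookups read the small plain-text "checkpoint" record that Saver writes into savedir. A minimal sketch of that lookup (the file contents below are illustrative, based on the TF 1.x CheckpointState text format):

```python
# Illustrative contents of the "checkpoint" file Saver maintains
checkpoint_file = '''model_checkpoint_path: "linear.cpkt-15"
all_model_checkpoint_paths: "linear.cpkt-5"
all_model_checkpoint_paths: "linear.cpkt-10"
all_model_checkpoint_paths: "linear.cpkt-15"
'''

def parse_latest(text):
    # Return the path named on the model_checkpoint_path line,
    # which is what latest_checkpoint() resolves to.
    for line in text.splitlines():
        if line.startswith('model_checkpoint_path:'):
            return line.split('"')[1]

print(parse_latest(checkpoint_file))  # linear.cpkt-15
```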

3. A Simpler Way to Save Checkpoints

There is an even simpler way to save checkpoints: the tf.train.MonitoredTrainingSession() function, which saves and loads checkpoint files directly. Unlike the previous method, it saves checkpoints based on training time: the save_checkpoint_secs parameter sets how many seconds elapse between saves.

```python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import os

train_x = np.linspace(-5, 3, 50)
train_y = train_x * 5 + 10 + np.random.random(50) * 10 - 5
# plt.plot(train_x, train_y, 'r.')
# plt.grid(True)
# plt.show()

tf.reset_default_graph()

X = tf.placeholder(dtype=tf.float32)
Y = tf.placeholder(dtype=tf.float32)

w = tf.Variable(tf.random.truncated_normal([1]), name='Weight')
b = tf.Variable(tf.random.truncated_normal([1]), name='bias')
z = tf.multiply(X, w) + b

cost = tf.reduce_mean(tf.square(Y - z))
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
init = tf.global_variables_initializer()

training_epochs = 30
display_step = 2

# MonitoredTrainingSession requires a global_step variable
global_step = tf.train.get_or_create_global_step()
step = tf.assign_add(global_step, 1)
saver = tf.train.Saver()
savedir = "check-point/"

if __name__ == '__main__':
    with tf.train.MonitoredTrainingSession(
            checkpoint_dir=savedir + 'linear.cpkt',
            save_checkpoint_secs=5) as sess:
        sess.run(init)
        loss_list = []
        for epoch in range(training_epochs):
            sess.run(global_step)
            for (x, y) in zip(train_x, train_y):
                sess.run(optimizer, feed_dict={X: x, Y: y})
            if epoch % display_step == 0:
                loss = sess.run(cost, feed_dict={X: x, Y: y})
                loss_list.append(loss)
                print('Iter: ', epoch, ' Loss: ', loss)
            w_, b_ = sess.run([w, b], feed_dict={X: x, Y: y})
            sess.run(step)  # increment global_step once per epoch
        print("Finished")
        print("W: ", w_, " b: ", b_, " loss: ", loss)

        plt.plot(train_x, train_x * w_ + b_, 'g-', train_x, train_y, 'r.')
        plt.grid(True)
        plt.show()

    load_epoch = 10
    with tf.Session() as sess2:
        sess2.run(tf.global_variables_initializer())
        # saver.restore(sess2, savedir + 'linear.cpkt-' + str(load_epoch))
        # cpkt = tf.train.get_checkpoint_state(savedir)
        # if cpkt and cpkt.model_checkpoint_path:
        #     saver.restore(sess2, cpkt.model_checkpoint_path)
        kpt = tf.train.latest_checkpoint(savedir + 'linear.cpkt')
        saver.restore(sess2, kpt)
        print(sess2.run([w, b], feed_dict={X: train_x, Y: train_y}))
```

In the code above we save a checkpoint every 5 seconds of training; the default interval is 10 minutes. This time-based saving mode is better suited to training complex models on large datasets. Note that this method requires defining a global_step variable and incrementing it by 1 after each batch (or each sample); otherwise an error is raised.
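The save_checkpoint_secs logic itself is simple, and can be sketched in plain Python (the `TimedCheckpointer` class below is a hypothetical illustration, not part of TensorFlow):

```python
class TimedCheckpointer:
    """Sketch of save_checkpoint_secs: save whenever at least
    `secs` seconds have elapsed since the last save."""

    def __init__(self, secs=5):
        self.secs = secs
        self.last_save = None

    def should_save(self, now):
        # First call always saves; later calls save only after the interval.
        if self.last_save is None or now - self.last_save >= self.secs:
            self.last_save = now
            return True
        return False

ckpt = TimedCheckpointer(secs=5)
print([ckpt.should_save(t) for t in (0, 2, 5, 7, 11)])
# [True, False, True, False, True]
```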

