Method 1: Print all variables in a loop
Template
for (x, y) in zip(tf.global_variables(), sess.run(tf.global_variables())): print '\n', x, y
Example
# coding=utf-8
import tensorflow as tf
import numpy as np


def func(in_put, layer_name, is_training=True):
    with tf.variable_scope(layer_name, reuse=tf.AUTO_REUSE):
        bn = tf.contrib.layers.batch_norm(inputs=in_put,
                                          decay=0.9,
                                          is_training=is_training,
                                          updates_collections=None)
    return bn


def main():
    with tf.Graph().as_default():
        # input_x
        input_x = tf.placeholder(dtype=tf.float32, shape=[1, 4, 4, 1])
        i_p = np.random.uniform(low=0, high=255, size=[1, 4, 4, 1])
        # outputs
        output = func(input_x, 'my', is_training=True)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            t = sess.run(output, feed_dict={input_x: i_p})
            # Method 1: print every global variable and its current value
            for (x, y) in zip(tf.global_variables(), sess.run(tf.global_variables())):
                print '\n', x, y


if __name__ == "__main__":
    main()
2017-09-29 10:10:22.714213: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)

<tf.Variable 'my/BatchNorm/beta:0' shape=(1,) dtype=float32_ref> [ 0.]

<tf.Variable 'my/BatchNorm/moving_mean:0' shape=(1,) dtype=float32_ref> [ 13.46412563]

<tf.Variable 'my/BatchNorm/moving_variance:0' shape=(1,) dtype=float32_ref> [ 452.62246704]

Process finished with exit code 0
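If you only want the variables created under one scope rather than every global variable, a variant of the loop above is to filter the GLOBAL_VARIABLES collection by scope and evaluate each entry. This is a minimal sketch, assuming the same 'my' scope and an already-open session sess as in the example; the scope filter and var.eval call are not part of the original article.

# Variant of Method 1 (sketch): print only the variables under the 'my' scope.
# Assumes a session `sess` is already open, as in the example above.
scoped_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='my')
for var in scoped_vars:
    # var.eval(session=sess) returns the value currently held in memory
    print var.name, var.eval(session=sess)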
Method 2: Print a variable by name
Template
print 'my/BatchNorm/beta:0', (sess.run('my/BatchNorm/beta:0'))
Example
# coding=utf-8
import tensorflow as tf
import numpy as np


def func(in_put, layer_name, is_training=True):
    with tf.variable_scope(layer_name, reuse=tf.AUTO_REUSE):
        bn = tf.contrib.layers.batch_norm(inputs=in_put,
                                          decay=0.9,
                                          is_training=is_training,
                                          updates_collections=None)
    return bn


def main():
    with tf.Graph().as_default():
        # input_x
        input_x = tf.placeholder(dtype=tf.float32, shape=[1, 4, 4, 1])
        i_p = np.random.uniform(low=0, high=255, size=[1, 4, 4, 1])
        # outputs
        output = func(input_x, 'my', is_training=True)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            t = sess.run(output, feed_dict={input_x: i_p})
            # Method 2: print specific variables by name
            print 'my/BatchNorm/beta:0', (sess.run('my/BatchNorm/beta:0'))
            print 'my/BatchNorm/moving_mean:0', (sess.run('my/BatchNorm/moving_mean:0'))
            print 'my/BatchNorm/moving_variance:0', (sess.run('my/BatchNorm/moving_variance:0'))


if __name__ == "__main__":
    main()
2017-09-29 10:12:41.374055: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
my/BatchNorm/beta:0 [ 0.]
my/BatchNorm/moving_mean:0 [ 8.08649635]
my/BatchNorm/moving_variance:0 [ 368.03442383]
Process finished with exit code 0
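Method 2 works because sess.run accepts a tensor name string directly. An equivalent, slightly more explicit variant is to resolve the name to a tensor handle on the graph first and run that handle. This is a minimal sketch, assuming the same default graph and open session sess as in the example above; the explicit lookup is not part of the original article.

# Variant of Method 2 (sketch): resolve the name to a tensor handle first.
# Assumes the default graph and an open session `sess`, as above.
graph = tf.get_default_graph()
beta = graph.get_tensor_by_name('my/BatchNorm/beta:0')
print 'my/BatchNorm/beta:0', sess.run(beta)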
The above covers these methods for printing TensorFlow variables held in memory; hopefully it serves as a useful reference.