Method 1:
Print all variables in a loop
Template
for (x, y) in zip(tf.global_variables(), sess.run(tf.global_variables())): print '\n', x, y
Example
# coding=utf-8
import tensorflow as tf
import numpy as np


def func(in_put, layer_name, is_training=True):
    with tf.variable_scope(layer_name, reuse=tf.AUTO_REUSE):
        bn = tf.contrib.layers.batch_norm(inputs=in_put,
                                          decay=0.9,
                                          is_training=is_training,
                                          updates_collections=None)
    return bn


def main():
    with tf.Graph().as_default():
        # input_x
        input_x = tf.placeholder(dtype=tf.float32, shape=[1, 4, 4, 1])
        i_p = np.random.uniform(low=0, high=255, size=[1, 4, 4, 1])
        # outputs
        output = func(input_x, 'my', is_training=True)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            t = sess.run(output, feed_dict={input_x: i_p})
            # Method 1: print all variables in a loop
            for (x, y) in zip(tf.global_variables(), sess.run(tf.global_variables())):
                print '\n', x, y


if __name__ == "__main__":
    main()
2017-09-29 10:10:22.714213: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)

<tf.Variable 'my/BatchNorm/beta:0' shape=(1,) dtype=float32_ref> [ 0.]

<tf.Variable 'my/BatchNorm/moving_mean:0' shape=(1,) dtype=float32_ref> [ 13.46412563]

<tf.Variable 'my/BatchNorm/moving_variance:0' shape=(1,) dtype=float32_ref> [ 452.62246704]

Process finished with exit code 0
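The key idea behind Method 1 is that `sess.run(tf.global_variables())` returns the variables' current values in the same order as the variable handles, so `zip` pairs each handle with its value. Here is a minimal, self-contained sketch of that loop, written against `tf.compat.v1` so it also runs under TensorFlow 2.x; the scope name `demo` and variable `w` are made up for illustration and are not from the original example.

```python
import tensorflow.compat.v1 as tf1

tf1.disable_eager_execution()

snapshot = {}
with tf1.Graph().as_default():
    # A stand-in variable; 'demo' and 'w' are hypothetical names.
    with tf1.variable_scope('demo'):
        tf1.get_variable('w', shape=[2], initializer=tf1.zeros_initializer())
    with tf1.Session() as sess:
        sess.run(tf1.global_variables_initializer())
        # sess.run on the list of handles returns values in the same
        # order, so zip pairs each variable with its current value.
        for var, val in zip(tf1.global_variables(),
                            sess.run(tf1.global_variables())):
            print(var.name, val)
            snapshot[var.name] = val
```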
Method 2:
Print a variable by its name
Template
print 'my/BatchNorm/beta:0', (sess.run('my/BatchNorm/beta:0'))
Example
# coding=utf-8
import tensorflow as tf
import numpy as np


def func(in_put, layer_name, is_training=True):
    with tf.variable_scope(layer_name, reuse=tf.AUTO_REUSE):
        bn = tf.contrib.layers.batch_norm(inputs=in_put,
                                          decay=0.9,
                                          is_training=is_training,
                                          updates_collections=None)
    return bn


def main():
    with tf.Graph().as_default():
        # input_x
        input_x = tf.placeholder(dtype=tf.float32, shape=[1, 4, 4, 1])
        i_p = np.random.uniform(low=0, high=255, size=[1, 4, 4, 1])
        # outputs
        output = func(input_x, 'my', is_training=True)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            t = sess.run(output, feed_dict={input_x: i_p})
            # Method 2: print variables by name
            print 'my/BatchNorm/beta:0', (sess.run('my/BatchNorm/beta:0'))
            print 'my/BatchNorm/moving_mean:0', (sess.run('my/BatchNorm/moving_mean:0'))
            print 'my/BatchNorm/moving_variance:0', (sess.run('my/BatchNorm/moving_variance:0'))


if __name__ == "__main__":
    main()
2017-09-29 10:12:41.374055: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
my/BatchNorm/beta:0 [ 0.]
my/BatchNorm/moving_mean:0 [ 8.08649635]
my/BatchNorm/moving_variance:0 [ 368.03442383]
Process finished with exit code 0
That covers both ways of printing TensorFlow variables held in memory; hopefully it serves as a useful reference.
华山资源网 Design By www.eoogi.com