How to suppress verbose TensorFlow logging? [duplicate]
deep-learning
tensorflow

I'm unit testing my TensorFlow code with nosetests, but it produces such an enormous amount of verbose output that it is rendered useless.

The following test

import unittest
import tensorflow as tf

class MyTest(unittest.TestCase):

    def test_creation(self):
        self.assertEquals(True, False)

creates a huge amount of useless logging when run with nosetests:

FAIL: test_creation (tests.test_tf.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/cebrian/GIT/thesis-nilm/code/deepmodels/tests/test_tf.py", line 10, in test_creation
    self.assertEquals(True, False)
AssertionError: True != False
-------------------- >> begin captured logging << --------------------
tensorflow: Level 1: Registering Const (<function _ConstantShape at 0x7f4379131c80>) in shape functions.
tensorflow: Level 1: Registering Assert (<function no_outputs at 0x7f43791319b0>) in shape functions.
tensorflow: Level 1: Registering Print (<function _PrintGrad at 0x7f4378effd70>) in gradient.
tensorflow: Level 1: Registering Print (<function unchanged_shape at 0x7f4379131320>) in shape functions.
tensorflow: Level 1: Registering HistogramAccumulatorSummary (None) in gradient.
tensorflow: Level 1: Registering HistogramSummary (None) in gradient.
tensorflow: Level 1: Registering ImageSummary (None) in gradient.
tensorflow: Level 1: Registering AudioSummary (None) in gradient.
tensorflow: Level 1: Registering MergeSummary (None) in gradient.
tensorflow: Level 1: Registering ScalarSummary (None) in gradient.
tensorflow: Level 1: Registering ScalarSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering MergeSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering AudioSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering ImageSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering HistogramSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering HistogramAccumulatorSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering Pack (<function _PackShape at 0x7f4378f047d0>) in shape functions.
tensorflow: Level 1: Registering Unpack (<function _UnpackShape at 0x7f4378f048c0>) in shape functions.
tensorflow: Level 1: Registering Concat (<function _ConcatShape at 0x7f4378f04938>) in shape functions.
tensorflow: Level 1: Registering ConcatOffset (<function _ConcatOffsetShape at 0x7f4378f049b0>) in shape functions.

......

whereas using tensorflow from an ipython console does not seem to be nearly as verbose:

$ ipython
Python 2.7.11+ (default, Apr 17 2016, 14:00:29) 
Type "copyright", "credits" or "license" for more information.

IPython 4.2.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally

In [2]:

How can I suppress the logging above when running nosetests?

Source: Stack Overflow

2 Answers

2.0 Update (10/8/19): Setting TF_CPP_MIN_LOG_LEVEL should still work (see the v0.12+ update below), but there is currently an open issue (see issue #31870). If setting TF_CPP_MIN_LOG_LEVEL does not work for you (again, see below), try doing the following to set the log level:

import tensorflow as tf
tf.get_logger().setLevel('INFO')

Additionally, see the documentation on tf.autograph.set_verbosity, which sets the verbosity of autograph log messages - for example:

# Can also be set using the AUTOGRAPH_VERBOSITY environment variable
tf.autograph.set_verbosity(1)

v0.12+ Update (5/20/17), working through TF 2.0+:

In TensorFlow 0.12+, per this issue, you can now control logging via the environment variable called TF_CPP_MIN_LOG_LEVEL; it defaults to 0 (all logs shown), but can be set to one of the following values under the Level column.

  Level | Level for Humans | Level Description                  
 -------|------------------|------------------------------------ 
  0     | DEBUG            | [Default] Print all messages       
  1     | INFO             | Filter out INFO messages           
  2     | WARNING          | Filter out INFO & WARNING messages 
  3     | ERROR            | Filter out all messages      

See the following generic, OS-agnostic example using Python:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # or any {'0', '1', '2'}
import tensorflow as tf

To be more thorough, you can also set the level of the Python tf_logging module, which is used in summary ops, TensorBoard, various estimators, etc.

# append to lines above
tf.logging.set_verbosity(tf.logging.ERROR)  # or any {DEBUG, INFO, WARN, ERROR, FATAL}

For 1.14, you will receive warnings if you do not change to use the v1 API as follows:

# append to lines above
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)  # or any {DEBUG, INFO, WARN, ERROR, FATAL}


For Prior Versions of TensorFlow or TF-Learn Logging (v0.11.x or lower):

See the page below for information on TensorFlow logging; with the new update, you're able to set the logging verbosity to DEBUG, INFO, WARN, ERROR, or FATAL. For example:

tf.logging.set_verbosity(tf.logging.ERROR)

The page additionally goes over monitors which can be used with TF-Learn models. Here is the page.

However, this does not stop all logging (only TF-Learn). I have two solutions; one is a "technically correct" solution (Linux) and the other involves rebuilding TensorFlow.

script -c 'python [FILENAME].py' | grep -v 'I tensorflow/'

For the other option, see this answer, which involves modifying the source and rebuilding TensorFlow.
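As a related sketch (my own workaround, not part of the linked answer): the captured output in the question shows these messages going through the Python logger named tensorflow, so you can attach a logging.Filter that drops just the noisy "Registering ..." lines while keeping everything else:

```python
import logging

class DropRegisteringMessages(logging.Filter):
    """Drop the 'Registering ... in shape functions / in gradient' messages."""
    def filter(self, record):
        # Return False to drop a record, True to keep it.
        return 'Registering' not in record.getMessage()

# TensorFlow's Python-side messages go through the 'tensorflow' logger,
# as the captured log output above shows.
logging.getLogger('tensorflow').addFilter(DropRegisteringMessages())
```

Note this only affects Python-side logging; the C++-level "I tensorflow/..." lines are untouched, which is what the script/grep approach above handles.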


Running the tests with nosetests --nologcapture will disable display of these logs. More information on logging in nose: https://nose.readthedocs.io/en/latest/plugins/logcapture.html
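If you want something finer-grained than disabling nose's log capture entirely, a minimal sketch (my own suggestion, assuming TensorFlow logs through the standard tensorflow logger, as the captured output in the question indicates) is to raise that logger's level in setUp and restore it in tearDown:

```python
import logging
import unittest

class MyQuietTest(unittest.TestCase):

    def setUp(self):
        # Silence TensorFlow's Python-side logging for this test case only.
        self._tf_logger = logging.getLogger('tensorflow')
        self._old_level = self._tf_logger.level
        self._tf_logger.setLevel(logging.ERROR)

    def tearDown(self):
        # Restore the previous level so other tests are unaffected.
        self._tf_logger.setLevel(self._old_level)

    def test_creation(self):
        self.assertEqual(True, True)
```

This keeps nose's log capture active for other loggers, so unrelated debug output still shows up in failure reports.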
