
container_log_level does not work in TensorFlow estimator #1875

@tailaiw


Describe the bug
The container_log_level parameter has no effect in the TensorFlow estimator.

To reproduce
I have a TensorFlow estimator built roughly as follows:

import logging

from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    sagemaker_session=sagemaker_session,  # placeholders defined elsewhere
    entry_point="my_entry_point.py",
    source_dir=my_source_dir,
    role=my_role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="2.1",
    py_version="py3",
    checkpoint_s3_uri=my_checkpoint_s3_uri,
    container_log_level=logging.WARNING,  # expect only WARNING and above
)
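As far as I can tell, the SDK is supposed to forward container_log_level into the container as the SAGEMAKER_CONTAINER_LOG_LEVEL environment variable (the exact name is my assumption from reading the SDK source, so treat it as such). A minimal sketch for checking, from inside the entry point, whether the value actually arrives:

# inside my_entry_point.py -- debugging sketch; the env var name is an assumption
import logging
import os

raw_level = os.environ.get("SAGEMAKER_CONTAINER_LOG_LEVEL")
print("container log level seen by the script:", raw_level)

# if the value does arrive, apply it to the root logger explicitly
if raw_level is not None:
    logging.getLogger().setLevel(int(raw_level))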

I expect that no logs below the WARNING level will be included in the training job logs. However, a lot of INFO-level logs are observed, in particular many INFO logs about frequent checkpoint uploads (from the instance to S3), which make the entire log extremely long. I tried logging.ERROR as well, with no luck either.
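For reference, a workaround I would expect to help (a sketch only, assuming TF 2.x; the checkpoint-upload INFO lines may also originate in the container toolkit rather than TensorFlow itself, so this may not silence everything) is to raise the log levels directly inside the entry point:

# inside my_entry_point.py -- workaround sketch, not a confirmed fix
import logging
import os

# silence TensorFlow's C++-side logs below WARNING
# (0 = all, 1 = filter INFO, 2 = filter INFO+WARNING, 3 = errors only);
# must be set before tensorflow is imported
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"

import tensorflow as tf

# raise the Python-side loggers to WARNING as well
tf.get_logger().setLevel(logging.WARNING)
logging.getLogger().setLevel(logging.WARNING)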

Expected behavior
No logs other than WARNING- and ERROR-level messages should be observed.

Screenshots or logs
(Screenshot: training job log showing repeated INFO-level checkpoint upload messages.)

System information

  • SageMaker Python SDK version: 2.5.1
  • Framework name (e.g. PyTorch) or algorithm (e.g. KMeans): TensorFlow
  • Framework version: 2.1
  • Python version: 3.7
  • CPU or GPU: GPU
  • Custom Docker image (Y/N): N
