
incorrect app.flags.DEFINE_boolean behavior #379

Closed
mborgerding opened this issue Nov 30, 2015 · 5 comments
@mborgerding (Contributor):

Demo Python script, test.py:

#!/usr/bin/python
import tensorflow as tf

tf.app.flags.DEFINE_boolean('tea', False, 'Soggy Leaves')
cfg = tf.app.flags.FLAGS

if cfg.tea:
    print("have some tea")
else:
    print("no tea")

Expected behavior

When the above test.py is run with no arguments, with --notea, or with --tea=True, the script behaves as expected...

[markb@schur tmp]$ ./test.py
no tea
[markb@schur tmp]$ ./test.py --notea
no tea
[markb@schur tmp]$ ./test.py --tea=True
have some tea

Weird stuff

Strangely, --tea=False is incorrectly interpreted as True. Having both tea and no tea is only possible in certain Infocom games.

[markb@schur tmp]$ ./test.py --tea=False
have some tea

Apparently, only those who truly want tea will get it.

[markb@schur tmp]$ ./test.py --tea
usage: test.py [-h] [--tea TEA] [--notea]
test.py: error: argument --tea: expected one argument

C'mon, it's a boolean flag!
"Say something once. Why say it again?" -- Talking Heads

@vrv vrv self-assigned this Nov 30, 2015

vrv commented Dec 1, 2015

Looks like we're getting hit by: http://stackoverflow.com/questions/15008758/parsing-boolean-values-with-argparse

It's doing bool("False") --> True.

I'll try to get it to support:

--tea --> tea is True
--tea=True -> tea is True
--tea=False -> tea is False
--notea --> tea is False
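The underlying argparse pitfall can be reproduced and worked around in plain Python. A minimal sketch of the intended behavior above (str2bool here is an illustrative helper, not TensorFlow API):

```python
import argparse

def str2bool(s):
    # type=bool would call bool("False"), which is True for any
    # non-empty string; parse the text explicitly instead.
    return s.lower() in ("true", "t", "1", "yes")

parser = argparse.ArgumentParser()
# nargs="?" with const=True lets a bare --tea mean True, while
# --tea=True / --tea=False go through str2bool.
parser.add_argument("--tea", nargs="?", const=True, default=False,
                    type=str2bool, help="Soggy Leaves")

print(parser.parse_args([]).tea)               # False
print(parser.parse_args(["--tea"]).tea)        # True
print(parser.parse_args(["--tea=True"]).tea)   # True
print(parser.parse_args(["--tea=False"]).tea)  # False
```

Note this sketch does not generate a --notea negation; that part needs an explicit paired option.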

@vrv vrv closed this as completed in 795f35d Dec 1, 2015
teamdandelion pushed a commit to tensorflow/tensorboard that referenced this issue May 23, 2017
Change:
	Clean up documentation for ReverseSequence
Change:
	Updated several tensorflow operations to use 32bit indices on GPU.
Change:
	Add attribute batch_dim to ReverseSequenceOp.
Change:
	Fix error in convert_to_records.py.  As reported in
	tensorflow/tensorflow#370
	by AlexUnderMicrocontRoll.
Change:
	Update TensorBoard README.
Change:
	Fixes to boolean flags reported in
	tensorflow/tensorflow#379.  Supports:

	--bool_flag=True  --> True
	--bool_flag=False  --> False
	--bool_flag=gibberish  --> False
	--bool_flag --> True
	--nobool_flag --> False

	Fixes #379
Change:
	Update generated Op docs.
Change:
	Enable local development of TensorBoard using gulp
	Also make tf-tensorboard a regular component rather than special case

	This is mostly effected by creating tfserve.js, which is a small server
	with clever routing to load from bower_components/ and components/ using
	the paths that work within google3.

	Workflow: `gulp serve`
Change:
	Add a full working code example to the tensorboard and summaries tutorial
Change:
	Fix seq2seq_test when running on GPU.

	The "proj_w" and "proj_b" variables were being created before the
	`test_session()`'s device function took effect, which pushed the
	placement algorithm into making an incorrect decision.
Change:
	Add a sentence in TensorBoard README on how to serialize summary data to logs and provide link to the how-to tutorial on the TensorFlow website.
Change:
	Add error-catching code if string_input_producer is supplied a null input.
	Before this change, it would die with an opaque shape error from inside
	the queue.  This change catches (most) python null lists being
	passed directly in, and at runtime detects null tensors.

	Adds two tests for this to input_test.py
Change:
	Speed up for models that use the same variable multiple times in the case
	where variables must be copied across devices:
	- Have Variables wrap the Variable op in an Identity op when converted to Tensor.
	  This avoids multiple copies across devices if a variable is used multiple times
	  in a computation.
	- Add Variable.mutable() to return the non-wrapped Variable op for use when
	  assigning new values.
	- Add an as_ref parameter to convert_to_tensor() to allow code to specify
	  if they plan to assign a new value to the result of the conversion.  Make Variable
	  return the result of Variable.mutable() when as_ref is True.
	- Make all ops that assign values to variables pass as_ref=True when converting
	  their arguments.
Change:
	Change to reduce critical section times in gpu_event_mgr.h:
	(1) Call stream->ThenRecordEvent outside the EventMgr critical section
	(2) Do memory deallocation outside the critical section

	Speeds up one configuration of ptb_word_lm from 2924 words per
	second (wps) to 3278 wps on my desktop machine with a Titan X.
Change:
	Remove some colons that break the open source build

	::tensorflow::StringPiece breaks for @raingo, see
	tensorflow/tensorflow#358.
	tensorflow::StringPiece (without the leading colons)
	seems to fix the problem.
Change:
	Added check that inputs to Operation is a list and make a defensive copy of the input. This is for cases where the input list is changed such as in _add_input.
Change:
	Use standard names for TensorFlow dtypes in the tutorial.
Change:
	Add tests for tensor inputs.
Change:
	Fix build after declaring more types for ops
Change:
	Switch to 32 bit indexing to speedup convolutions and concatenations.
Change:
	Add convert_image op to convert between types for images (similar to OpenCV's cvtScale).
Change:
	Make cast work between numeric types (bool, uint8, int16, int32, int64, float, double).
Change:

	Padding input data for odd number of paddings, so we can use cudnn anyway.
	+ Fix total padding computation when padding==VALID.
	+ This CL makes the Googlenet benchmark run 5x faster.

Change:
	Support IndexedSlices in ConcatGrad
Change:
	* sampled softmax op uses one embedding lookup for positive and negative samples
	* float64 support for sampled softmax
Change:
	Move RNN code out of models.rnn (without breaking existing code).  The API may still undergo minor changes until full documentation is added.
Change:
	Changed to use per-step stacks for the accumulators used in while-loop gradient computation. This addresses the problem caused by using concat without sufficient static shape information. It should also improve performance as we avoided those expensive concats.
Change:
	Update generated Op docs.
Change:
	Improve error messages when the optimizer finds no variables to minimize or
	when none of the variables has gradients.
Change:
	Say that -1 isn't just for flattening in reshape docs

	Also add scalar reshape (reshape(t, [])) as an example.

	This fixes tensorflow/tensorflow#281.
Change:
	This is a test.

Base CL: 109118714
tarasglek pushed a commit to tarasglek/tensorflow that referenced this issue Jun 20, 2017
Example of data conversion and fix comments

gokul-uf commented Nov 22, 2018

@vrv this issue still persists as of TF 1.12.0. All the examples above work as expected, but ./test.py --tea False gives the output have some tea. Is this still expected behavior?


vrv commented Nov 22, 2018

TensorFlow's implementation of flags was replaced by absl: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/platform/flags.py

So any flag-related issues should probably be filed at https://github.com/abseil/abseil-py/issues now, unless there's evidence that the wrapper in TensorFlow is the issue.

@gokul-uf:

Thanks, I will raise it there

@MartinPeris:

I know this has been closed for a long time and tf.app.flags is not really supported, but defining tea like this gives the expected behavior:

tf.app.flags.DEFINE_boolean('tea', None, 'Soggy Leaves')
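For code that has since moved off tf.app.flags entirely, the stdlib now covers this pattern directly. A minimal sketch on Python 3.9+ (note argparse generates a hyphenated --no-tea, unlike absl's --notea):

```python
import argparse

parser = argparse.ArgumentParser()
# BooleanOptionalAction (Python 3.9+) auto-generates a --no-tea
# negation alongside --tea.
parser.add_argument("--tea", action=argparse.BooleanOptionalAction,
                    default=False, help="Soggy Leaves")

print(parser.parse_args([]).tea)            # False
print(parser.parse_args(["--tea"]).tea)     # True
print(parser.parse_args(["--no-tea"]).tea)  # False
```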

e-budur added a commit to e-budur/bert that referenced this issue Apr 9, 2020