
FIX: Undefined symbols in CompiledModel are automatically set to zero #90

Merged
merged 8 commits into develop from fix_issue54 on May 9, 2017

Conversation

@richardotis
Collaborator

richardotis commented May 5, 2017

Fixes gh-54.

The warning mechanism here (using the logging module) is the same as the one calculate uses to emit this warning for Model, but it differs from the mechanism in equilibrium, which uses the warnings module. There may have been a reason I originally picked the logging module (perhaps the ability to redirect or disable messages selectively), but it never caught on with me. I am also not sure how to do the equivalent of warnings.catch_warnings with the logging module, so that the test can programmatically verify we are emitting the correct warnings.
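
For reference, the warnings-module pattern I mean looks roughly like this (the message here is a stand-in, not the actual pycalphad warning):

import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # record everything, regardless of active filters
    warnings.warn("Undefined symbol GHSERXX set to zero")  # stand-in for the real call

# caught is a list of WarningMessage objects the test can assert against
assert any("Undefined symbol" in str(w.message) for w in caught)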

@richardotis richardotis requested a review from bocklund May 5, 2017

@bocklund

Collaborator

bocklund commented May 6, 2017

You can set up filtering with warnings similarly, I think. IMO logging should be reserved for looking at what happened in history and shouldn't be used to let the user know that something went wrong at runtime.
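
A minimal sketch of what I mean (the filters here are illustrative, not ones we actually ship):

import warnings

# Ignore a whole category:
warnings.filterwarnings("ignore", category=UserWarning)

# Or target warnings by message pattern and originating module:
warnings.filterwarnings("ignore", message="Undefined symbol.*", module="pycalphad.*")

# Or turn warnings into errors, e.g. during testing:
warnings.simplefilter("error")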

Also, I believe logging output has historically been trickier to make visible in Jupyter notebooks without manually importing logging and setting the log level. So from that and a Binder or JupyterHub perspective, I'd like to see this use warnings unless there is a compelling reason to use logging.

All that being said, is it worth issuing a warning here at all, via logging or otherwise?

@richardotis

Collaborator

richardotis commented May 6, 2017

This particular warning (undefined symbol) has saved me a few times when I typoed a FUNCTION name or forgot to define it; it helped me figure out why I was getting nonsensical results. Usually an undefined symbol is a mistake, and if I were designing TDBs from scratch I would make it an error, but setting it to zero matches the behavior of Thermo-Calc (TC) and is sometimes convenient when building new databases.

I'll take a look at changing the warning method from logging to the warnings module here and also in calculate.
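
Roughly, I expect the change to look something like this (the helper and symbol names are hypothetical, not the actual CompiledModel code):

import warnings

def lookup_symbol(name, defined_symbols):
    # Hypothetical helper: undefined symbols default to zero with a warning,
    # matching Thermo-Calc's behavior.
    if name not in defined_symbols:
        warnings.warn('Undefined symbol {0} set to zero'.format(name), UserWarning)
        return 0.0
    return defined_symbols[name]

value = lookup_symbol('GHSERAL', {})  # emits the warning and returns 0.0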

@richardotis

Collaborator

richardotis commented May 6, 2017

For the record, the way to programmatically capture logging output is discussed here: http://alanwsmith.com/capturing-python-log-output-in-a-variable

The key part is reproduced below:

import logging
import io

### Create the logger
logger = logging.getLogger('basic_logger')
logger.setLevel(logging.DEBUG)

### Set up a stream handler backed by a StringIO object
log_capture_string = io.StringIO()
ch = logging.StreamHandler(log_capture_string)
ch.setLevel(logging.DEBUG)

### Optionally add a formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)

### Add the handler to the logger
logger.addHandler(ch)

### Send log messages
logger.debug('debug message')
logger.info('info message')
logger.warning('warning message')  # logger.warn is deprecated
logger.error('error message')
logger.critical('critical message')

### Pull the contents back into a string and close the stream
log_contents = log_capture_string.getvalue()
log_capture_string.close()

### Output as lower case to prove it worked
print(log_contents.lower())

@bocklund

Nice. Do you think we should check that the parameters were actually set to zero by looking at the calculation result? Maybe something hard-coded that ensures we don't accidentally set undefined parameters to 1 or -10000 in the future?
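
Something like this, maybe (using sympy as a stand-in for the real evaluation path; the symbol name is made up):

from sympy import Symbol

GHSERXX = Symbol('GHSERXX')  # pretend this FUNCTION was never defined in the TDB
expr = 2 * GHSERXX + 5

# Undefined symbols should be substituted with exactly zero before evaluation.
value = expr.subs({GHSERXX: 0})
assert value == 5  # hard-coded: would be 7 if undefined symbols defaulted to 1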

@richardotis

Collaborator

richardotis commented May 8, 2017

Good idea.

@bocklund

Ready to merge

@richardotis

Collaborator

richardotis commented May 8, 2017

@bocklund Trying to figure out why the Linux py27 tests are hanging. I thought it might be a dependency issue, but the previous commit runs fine when restarted.

@richardotis

Collaborator

richardotis commented May 8, 2017

I cannot reproduce this in a clean local py27 environment.

@richardotis

Collaborator

richardotis commented May 9, 2017

The coverage check looks bugged but otherwise everything passes, so I'll merge.

@richardotis richardotis merged commit 5bc5ed3 into develop May 9, 2017

4 of 5 checks passed

coverage/coveralls: Coverage pending from Coveralls.io
continuous-integration/appveyor/branch: AppVeyor build succeeded
continuous-integration/appveyor/pr: AppVeyor build succeeded
continuous-integration/travis-ci/pr: The Travis CI build passed
continuous-integration/travis-ci/push: The Travis CI build passed

@richardotis richardotis deleted the fix_issue54 branch Nov 14, 2018
