
fix command line run for reinforce_learn_qnet in pl_examples #5414

Merged
merged 7 commits into Lightning-AI:master
Jan 19, 2021

Conversation

sidhantls
Contributor

@sidhantls sidhantls commented Jan 8, 2021

What does this PR do?

Unused default arguments were removed from the argparse setup in reinforce_learn_qnet.py; they were causing an error when running the example from the command line. Previously, the script added the model-specific args warm_start_size and max_episode_reward, neither of which the model accepts as input. I also pass add_help=False to ArgumentParser because of the conflicting help options error (issue #5382).

warm_start_size and warm_start_steps both mean the same thing, but only warm_start_steps is used in the model, so the warm_start_size argument can be removed. This matches its implementation in pl_bolts.
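As a minimal sketch of the parent/child parser pattern this PR relies on (the --lr and --batch_size flags below are illustrative placeholders, not the example's real arguments): passing add_help=False to the parent parser lets the child parser define -h/--help without a conflict.

```python
import argparse

# Parent parser: add_help=False means it does NOT register -h/--help,
# leaving the child parser free to add its own help option.
parent = argparse.ArgumentParser(add_help=False)
parent.add_argument("--lr", type=float, default=1e-2, help="learning rate")

# Child parser inherits the parent's arguments via parents=[...];
# it keeps the default add_help=True, so -h/--help is defined exactly once.
child = argparse.ArgumentParser(parents=[parent])
child.add_argument("--batch_size", type=int, default=16, help="batch size")

args = child.parse_args(["--lr", "0.05"])
print(args.lr, args.batch_size)  # -> 0.05 16
```

Running the child parser with --help still lists both the parent's and the child's custom help text, which is why disabling add_help on the parent loses nothing.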

Fixes #5381, #5382 (duplicate)

Before submitting

  • Was this discussed/approved via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or internal minor changes/refactorings)

PR review

Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines. In short, see the following bullet list:

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified
  • Check that target branch and milestone match!

Did you have fun?

Make sure you had fun coding 🙂

@codecov

codecov bot commented Jan 8, 2021

Codecov Report

Merging #5414 (35a3afc) into master (18d2ae8) will not change coverage.
The diff coverage is n/a.

@@          Coverage Diff           @@
##           master   #5414   +/-   ##
======================================
  Coverage      93%     93%           
======================================
  Files         135     135           
  Lines       10005   10005           
======================================
  Hits         9339    9339           
  Misses        666     666           

Member

@Borda Borda left a comment


good catch

@Borda Borda added the example label Jan 8, 2021
@Borda Borda added this to the 1.1.x milestone Jan 8, 2021
@Borda Borda added the ready PRs ready to be merged label Jan 8, 2021
@Borda Borda enabled auto-merge (squash) January 8, 2021 10:10
@Borda Borda self-assigned this Jan 8, 2021
@sidhantls
Contributor Author

@Borda Sure :)

Unrelated to this PR: this example does not accept command line arguments for the Trainer as most of the other examples do; only model-specific args can be passed. If you think this would be useful, let me know and I'll submit a PR for it.

@@ -424,7 +420,7 @@ def main(args) -> None:
     torch.manual_seed(0)
     np.random.seed(0)

-    parser = argparse.ArgumentParser()
+    parser = argparse.ArgumentParser(add_help=False)
Contributor


Why did you set add_help=False? It is always better to enable help.

Contributor Author


The code doesn't run with add_help=True (#5382).

Since we supply our own help text for each argument, I think we can set this to False. With it set to False, running python pl_examples\domain_templates\reinforce_learn_Qnet.py --help still displays the custom help options we provided.

Contributor Author


edit: here add_help=False is set on the parent parser, but the parser that adds the model-specific args has add_help=True by default. When it is True on both parsers, the conflicting option error occurs, so one of them must be set to False.
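To illustrate the conflict described above, a short sketch: when both the parent and the child parser keep the default add_help=True, argparse tries to register -h/--help twice and raises an ArgumentError.

```python
import argparse

# Both parsers keep the default add_help=True, so each tries to
# register the -h/--help option; the second registration conflicts.
parent = argparse.ArgumentParser()

conflict_msg = None
try:
    child = argparse.ArgumentParser(parents=[parent])
except argparse.ArgumentError as err:
    conflict_msg = str(err)  # e.g. "... conflicting option strings: -h/--help"

print(conflict_msg)
```

Setting add_help=False on either parser (the PR chose the parent) removes one of the two registrations and resolves the error.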

@Borda Borda requested a review from tchaton January 14, 2021 23:14
@Borda Borda dismissed tchaton's stale review January 19, 2021 07:38

pls re-review

@Borda Borda merged commit 18bba25 into Lightning-AI:master Jan 19, 2021
Borda pushed a commit that referenced this pull request Feb 4, 2021
* fix wrong argument in argparse

* remove wrong default arg in argparser

* disable add help argparse
Labels
example, ready (PRs ready to be merged)

Successfully merging this pull request may close these issues.

pl_examples reinforce_learn_qnet wrong argument name and description in argparse
6 participants