Making AdventV2 automatic #137

Closed
wants to merge 24 commits into from

Conversation

tianyu-z
Contributor

No description provided.

@tianyu-z tianyu-z changed the title a quick PR to add an option for adventv2 entropy calculation Making AdventV2 automatic Sep 16, 2020
@tianyu-z tianyu-z marked this pull request as draft September 16, 2020 11:02
@tianyu-z tianyu-z marked this pull request as ready for review September 16, 2020 14:56
@tianyu-z tianyu-z mentioned this pull request Sep 16, 2020
@@ -4,10 +4,14 @@
from pathlib import Path
Contributor

Maybe you could move this file to the utils_scripts/ folder? :) I think it would make more sense for it to be there.

train.py Outdated
@@ -52,6 +59,17 @@ def main(opts):
opts.output_path = str(get_increased_path(opts.output_path))
pprint("Running model in", opts.output_path)

# check if auto adventv2 works
Contributor
@alexrey88 alexrey88 Sep 17, 2020

I think you could just write:

if opts.train.lambdas.advent.is_auto_adventv2:
    ...

instead of using the brackets, like it's done in the rest of the file, to keep the formatting consistent.

omnigan/utils.py Outdated
return opts


def adventv2EntropySplit(trainer, verbose=1):
Contributor

I don't know where I would put this function exactly, but I'm not sure it fits well in a utils file, especially since it's a big function that depends on another function in losses.py.

@@ -75,7 +86,7 @@ def merge_JsonFiles(filename):
with open(f1, "r") as infile:
result.extend(json.load(infile))

with open("easy_split_with_orignal_sim.json", "w") as output_file:
with open(args.save_path + "easy_split_with_orignal_sim.json", "w") as output_file:
Contributor

use

from pathlib import Path

...

with (Path(args.save_path) / "easy_split_with_orignal_sim.json").open("w") as output_file:
    ...

Because then you don't need to rely on args.save_path ending with a trailing /.
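To make the trailing-slash point concrete, here is a minimal sketch (the directory name is a made-up example; only the filename comes from the PR):

```python
from pathlib import Path

# String concatenation silently breaks when the directory lacks a trailing "/":
save_path = "outputs"
broken = save_path + "easy_split_with_orignal_sim.json"
# broken is now "outputseasy_split_with_orignal_sim.json" -- no separator.

# Path's "/" operator inserts the separator itself, trailing slash or not:
with_slash = Path("outputs/") / "easy_split_with_orignal_sim.json"
without_slash = Path("outputs") / "easy_split_with_orignal_sim.json"
assert with_slash == without_slash
```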

omnigan/utils.py Outdated


def tupleList2DictList(tuples, keys=["x", "m"]):
DictList = []
Contributor

Being picky here, but the convention is to use snake_case instead of camelCase: aaa_bbb instead of aaaBbb. Also, in Python, variable names should not start with a capital letter; that is reserved for class names (it's still syntactically correct and will run, but it goes against convention).
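For illustration, a snake_case rename of the helper above might look like this (a sketch; I'm assuming the function zips each tuple element-wise with the given keys, which is what the name suggests):

```python
def tuple_list_to_dict_list(tuples, keys=("x", "m")):
    # snake_case version of tupleList2DictList (sketch only): each tuple
    # is paired element-wise with `keys` to build one dict per tuple.
    return [dict(zip(keys, t)) for t in tuples]

# e.g. tuple_list_to_dict_list([("img.png", "mask.png")])
# -> [{"x": "img.png", "m": "mask.png"}]
```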

omnigan/utils.py Outdated
with open(f1, "r") as infile:
result.extend(json.load(infile))

with open(save_path + "easy_split_with_orignal_sim.json", "w") as output_file:
Contributor

Same here: use Path() to handle paths, it's way safer.

# start training if the expected training epochs is not 0
if opts.train.epochs != 0:
trainer.train()
if is_auto_adventv2:
Contributor

I think there needs to be a check on the number of epochs, because the default is 100000000, which means an endless phase 1...
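A hedged sketch of what that guard could look like (the is_auto_adventv2 option and the 100000000 default come from this PR; the function name and structure are my own, not the PR's actual code):

```python
ENDLESS_DEFAULT = 100000000  # default epoch count mentioned above

def check_phase1_epochs(epochs, is_auto_adventv2):
    # Refuse to start auto-AdventV2 if phase 1 would never end
    # (sketch only, not the PR's actual code).
    if is_auto_adventv2 and epochs >= ENDLESS_DEFAULT:
        raise ValueError(
            "train.epochs is still the endless default; set a finite "
            "epoch count for phase 1 when is_auto_adventv2 is enabled"
        )
    return True
```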

@vict0rsch
Contributor

Small changes requested. I think it would make more sense for you to walk us through the process so we can criticize it constructively :) Do you have reference experiments to look at how much v2 would help?

@tianyu-z
Contributor Author

Small changes requested. I think it would make more sense for you to walk us through the process so we can criticize it constructively :) Do you have reference experiments to look at how much v2 would help?

https://www.comet.ml/tianyu-z/omnigan/0a10bdb1c39a49d19e3e2135540aac2b/b1bf90d2562b45e2ad97369bf10b48d0/compare?experiment-tab=images&imageId=69c01437ab5c449781faa505eda811d6

The experiment is here. The entropy_split value for this experiment is 0.67, which means 67% of the original sim data goes into easy_split.json and the remaining 33% into hard_split.json. I have run entropy_split > 0.67, and 0.67 seems better than the other cases (by my eyeballing). entropy_split < 0.67 is still running; I will be able to give the result tomorrow.
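For context, the split logic can be sketched like this (hypothetical helper, not the PR's code; I'm assuming lower entropy means "easier", i.e. a more confident prediction):

```python
def split_by_entropy(entropies, entropy_split=0.67):
    # entropies: dict mapping sample id -> entropy value.
    # The entropy_split fraction of samples with the LOWEST entropy
    # goes to the easy split, the rest to the hard split
    # (assumption: lower entropy = more confident = easier sample).
    ranked = sorted(entropies, key=entropies.get)
    cut = int(len(ranked) * entropy_split)
    return ranked[:cut], ranked[cut:]
```

With entropy_split=0.67 on three samples, two land in the easy split and one in the hard split, matching the 67%/33% ratio described above.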

@tianyu-z tianyu-z changed the base branch from master to add_utils September 23, 2020 16:35
@tianyu-z tianyu-z changed the base branch from add_utils to master September 23, 2020 16:36
@vict0rsch
Contributor

Closing for now since we won't use it

@vict0rsch vict0rsch closed this Oct 26, 2020