CNN Support Added #94

Open · Prasanna28Devadiga wants to merge 18 commits into master
Conversation

@Prasanna28Devadiga commented Aug 27, 2021

Solves issue #77
The model is configured via the YAML file; I have provided an example YAML file. Tested on Colab. Using Keras lets us support a variety of layers beyond just the standard Conv2D. I have added docstrings as per the contributing guidelines.
@nidhaloff Is this adequate? Kindly give me a review

@nidhaloff (Owner)

@Prasanna28Devadiga Thanks for the PR. This looks better than the last one. I will review it in detail ASAP. I may add some comments if I find something that can be optimized or improved.

We stick with the defaults in case the user doesn't provide them

"""
with open(path, 'rb') as f:
@nidhaloff (Owner)

PS: You can use the read_yaml function from utils.py to read a yaml file

@Prasanna28Devadiga (Author)

Alright, will do.
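
Something along these lines, I suppose (a rough sketch only; the import path and exact signature of read_yaml, and the load_model_arguments helper, are illustrative assumptions):

# Rough sketch: swap the inline yaml.safe_load for the project's read_yaml helper.
from utils import read_yaml  # assumed: read_yaml(path) -> dict, as the reviewer describes

def load_model_arguments(path):
    """Read the YAML config and pull out the model arguments with defaults."""
    conf = read_yaml(path)
    arguments = conf.get('model', {}).get('arguments', {})
    return {'batch_size': arguments.get('batch_size', 32)}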

"""
with open(path, 'rb') as f:
conf = yaml.safe_load(f.read())
self.batch_size= conf['model']['arguments'].get('batch_size',32)
@nidhaloff (Owner)

You can move all default values to a defaults.py or config.py file. This will make it easier to read and maintain the code

@Prasanna28Devadiga (Author)

Yes, right, I should be doing that.
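
Roughly what I have in mind (a hypothetical defaults.py; the names and values below are illustrative, not final):

# defaults.py -- hypothetical module collecting all fallback values in one place
BATCH_SIZE = 32
TARGET_SIZE = (28, 28)
CLASS_MODE = "sparse"

# in the model-building code, the hard-coded literals would then become e.g.:
# self.batch_size = conf['model']['arguments'].get('batch_size', defaults.BATCH_SIZE)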

pool_size_2d = tuple(pool_size_2d)
pool_size_3d = tuple(pool_size_3d)

if x == "Dense":
@nidhaloff (Owner)

Can we get rid of these if/else branches? Maybe with a dict/map and a function...

@Prasanna28Devadiga (Author)

Hey, that's a pretty good idea. The challenge I faced was mapping these strings to the respective layer classes. I think I should be able to use a dictionary to do that, and maybe remove all this clutter from the main file.
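
Something like this could replace the if/else chain (a rough sketch; the layer names and kwargs are illustrative):

from tensorflow.keras import layers

# Map the layer-name strings from the YAML file to the Keras layer classes.
LAYER_REGISTRY = {
    "Dense": layers.Dense,
    "Conv2D": layers.Conv2D,
    "MaxPooling2D": layers.MaxPooling2D,
    "Flatten": layers.Flatten,
    "Dropout": layers.Dropout,
}

def build_layer(name, kwargs):
    """Look up the Keras layer class by name and instantiate it with its kwargs."""
    if name not in LAYER_REGISTRY:
        raise ValueError(f"Unsupported layer type: {name}")
    return LAYER_REGISTRY[name](**kwargs)

# e.g. build_layer("Dense", {"units": 10, "activation": "softmax"})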

class_mode: "sparse"
target_size: [28,28]
loss: "SparseCategoricalCrossentropy"
model_layers:
@nidhaloff (Owner)

Why not use the example I posted in the discussion comments? #76

@Prasanna28Devadiga (Author)

No particular reason, actually. I'll make commits with the example you mentioned if that's required.

# model.summary()
return model

def generate_dataset(self):
@nidhaloff (Owner)

Will this function work for all datasets? Is it dynamic, or would it only work for the MNIST example?

@Prasanna28Devadiga (Author)

Oh yes, it's dynamic. All it does is read the data from a dataframe named train.csv and then create two image data loaders. The first column of the dataframe is assumed to contain the image paths, and the next column the corresponding labels.

Talking about MNIST though, I just realized the default value for the image size variable is (784,). This is obviously a bug, as the default value should be (256^2,), considering the data generator resizes all images to (256, 256). I'll fix this in the next commit.
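
For reference, the generator setup is roughly this (a simplified sketch, not the exact PR code; the column handling, validation split, and sizes are illustrative assumptions):

import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator

df = pd.read_csv("train.csv")
path_col, label_col = df.columns[0], df.columns[1]  # first column: paths, second: labels
df[label_col] = df[label_col].astype(str)  # "sparse" class_mode expects string labels

datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_gen = datagen.flow_from_dataframe(
    df, x_col=path_col, y_col=label_col,
    target_size=(256, 256), class_mode="sparse",
    batch_size=32, subset="training",
)
val_gen = datagen.flow_from_dataframe(
    df, x_col=path_col, y_col=label_col,
    target_size=(256, 256), class_mode="sparse",
    batch_size=32, subset="validation",
)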

@Prasanna28Devadiga (Author)
@nidhaloff Thank you for the detailed review; it was extremely insightful. I've noticed a few key areas where improvements can be made, as well as some bugs to fix.
I intend to address these as soon as possible, and I'll also add a few unit tests with the next commit. My university exams are coming up, so it might take a bit longer. Let me know if that's fine :)
