
Added Model for Time Series Classification #253

Merged: 5 commits merged into FluxML:master on Sep 16, 2022

Conversation

codeboy5 (Contributor)

I have added the code for a basic RNN Model for the task of time series classification.

```julia
data, blocks = load(datarecipes()["ecg5000"]);
task = TSClassificationSingle(blocks, data);
model = FastAI.taskmodel(task);
traindl, validdl = taskdataloaders(data, task, 32);
callbacks = [ToGPU(), Metrics(accuracy)];
learner = Learner(model, tasklossfn(task); data=(traindl, validdl), optimizer=ADAM(), callbacks=callbacks);
fitonecycle!(learner, 10, 0.033)
```

[screenshot: training output]

As I discussed with @darsnack, the idea is to add an encoding that performs the reshaping to (features, batch, time) instead of doing it inside the RNN model. I am working on that right now.

I have removed the type from StackedLSTM, as it was redundant.
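
For illustration, here is a minimal sketch of what the reshaping step mentioned above could look like; the function name `collate_obs` and the assumption that raw observations arrive as (features, time) matrices are mine, not the PR's actual encoding:

```julia
# Hypothetical sketch: stack a vector of (features, time) observations into the
# (features, batch, time) array layout that the RNN backbone expects.
function collate_obs(obs::AbstractVector{<:AbstractMatrix{<:Real}})
    nfeatures, obslength = size(first(obs))
    batch = Array{Float32}(undef, nfeatures, length(obs), obslength)
    for (i, x) in enumerate(obs)
        batch[:, i, :] = x
    end
    return batch
end
```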

@codeboy5 (Contributor Author)

I have also added a multivariate classification dataset and verified the training loop is working fine for it.

@ToucheSir (Member) left a comment

A couple of drive-by comments on the WIP here.

FastTimeSeries/src/models/RNN.jl (review thread, outdated, resolved)
FastTimeSeries/src/models.jl (review thread, outdated, resolved)
@codeboy5 (Contributor Author)

Hey guys, @ToucheSir @darsnack @lorenzoh, I have updated the notebook. This PR essentially contains the code to load regression datasets and the notebook for the classification task.
I will start adding the InceptionTime model in the next PR, as we discussed during our weekly call.

@ToucheSir (Member) left a comment

Looks good to go modulo a couple of small things!

FastTimeSeries/src/models/StackedLSTM.jl (review thread, resolved)
FastTimeSeries/src/models/RNN.jl (review thread, resolved)
FastTimeSeries/src/models/RNN.jl (review thread, resolved)
FastTimeSeries/src/models.jl (review thread, resolved)
FastTimeSeries/src/models.jl (review thread, resolved)
```julia
function blockmodel(inblock::TimeSeriesRow,
                    outblock::OneHotTensor{0},
                    backbone)
    data = rand(Float32, inblock.nfeatures, 32, inblock.obslength)
```
Member

Should be using Flux.outputsize (I think we talked about this in a meeting). Changes required should be minimal :)

Contributor Author

I tried doing that, but it seems to give an error. I will try to debug it later:
https://pastebin.com/mB7aYvdD

Member

I see, this is FluxML/Flux.jl#1755 (comment). Can you do the following then?

  • Use `zeros` or `ones` instead of `rand`.
  • Pass a batch size of 1 instead of 32 and a sequence length of 1 instead of `inblock.obslength`. This will save some time, and you only care about the first output dimension anyway.
  • `reset!` the backbone after you calculate the output (see the sketch below).
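
For reference, a minimal sketch of that fallback, assuming the head is a `Dense` layer and that the `OneHotTensor` block exposes its classes via a `classes` field (both assumptions on my part; the PR's actual `blockmodel` may differ):

```julia
function blockmodel(inblock::TimeSeriesRow,
                    outblock::OneHotTensor{0},
                    backbone)
    # Dummy input with all zeros, batch size 1 and sequence length 1: only the
    # backbone's output feature dimension is needed, so a minimal input suffices.
    data = zeros(Float32, inblock.nfeatures, 1, 1)
    outsize = size(backbone(data), 1)
    # Clear the recurrent state accumulated by the dummy forward pass.
    Flux.reset!(backbone)
    return Chain(backbone, Dense(outsize, length(outblock.classes)))
end
```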

Contributor Author

Yes, sounds great.

```toml
InlineTest = "bd334432-b1e7-49c7-a2dc-dd9149e4ebd6"
MLUtils = "f1d291b0-491e-4a28-83b9-f70985020b54"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
UnicodePlots = "b8865327-cd53-5732-bb35-84acbb429228"
Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"
```
Member

Suggested change: remove the line `Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"`.

I don't see Zygote used anywhere, so one less dependency doesn't hurt.

Contributor Author

I tried doing that, but it throws an error:

```
ERROR: LoadError: ArgumentError: Package FastTimeSeries does not have Zygote in its dependencies
```

Member

This really could use a full stack trace, because AFAICT you're not using Zygote anywhere. Did you `] resolve` after removing it from the project?
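
For reference, the workflow being asked about corresponds roughly to the following Pkg calls (equivalently, `rm Zygote` followed by `resolve` in the `]` Pkg REPL), run from the FastTimeSeries project:

```julia
using Pkg
Pkg.activate(".")   # activate the FastTimeSeries project environment
Pkg.rm("Zygote")    # drop the dependency from Project.toml
Pkg.resolve()       # update Manifest.toml so it matches the trimmed Project.toml
```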

Contributor Author

Seems to work now; I might have missed the resolve. Thanks :)

FastTimeSeries/Project.toml (review thread, resolved)
@lorenzoh (Member) commented Sep 3, 2022

Great progress in my absence and cool to see the training working!

Are the outstanding comments being addressed in #256 or could this be merged individually beforehand?

@codeboy5 (Contributor Author) commented Sep 3, 2022

> Great progress in my absence and cool to see the training working!
>
> Are the outstanding comments being addressed in #256 or could this be merged individually beforehand?

I am resolving the outstanding comments in #256; some of them have been resolved, and a couple are left, which I will resolve by the end of the day. This can be merged then, I believe.

@codeboy5 (Contributor Author)

I think this PR is complete? The comments have been resolved in the next PR; let me know if something is missing.

@ToucheSir (Member)

I wasn't aware #256 was also working on the StackedRNN model. It would be preferable to merge those changes here so that that PR stays more focused on the InceptionTime model. If not, we can just close this one and focus on that one (I'm interpreting your comment to mean that the changes in #256 completely subsume this PR).

@codeboy5 (Contributor Author)

> I wasn't aware #256 was also working on the StackedRNN model. It would be preferable to merge those changes here so that that PR stays more focused on the InceptionTime model. If not, we can just close this one and focus on that one (I'm interpreting your comment to mean that the changes in #256 completely subsume this PR).

PR #256 does contain the code changes from this PR, since I created that branch from this one; other than that, there is just some renaming done there, etc.
If we merge this, would those commits then no longer show up in that PR (since they would already be committed to the master branch)? Specifically, the commits from "added RNN Model" through "updated notebook".
If yes, we can merge this and then merge that one. Otherwise, we can close this and merge that one.

@ToucheSir (Member)

You'd likely have to rebase #256 after merging this PR if you've made changes to the same functions/types in that one. If that doesn't sound like a problem, we can merge this first.

@codeboy5 (Contributor Author)

> You'd likely have to rebase #256 after merging this PR if you've made changes to the same functions/types in that one. If that doesn't sound like a problem, we can merge this first.

Wouldn't be an issue. 👍🏻

@ToucheSir (Member) left a comment

Merging; as noted, the follow-ups will be addressed in #256.

@ToucheSir ToucheSir merged commit 3d70b50 into FluxML:master Sep 16, 2022
@codeboy5 codeboy5 deleted the ts-models branch September 17, 2022 03:57