Analysing the performance of different methods to get windows #62

Closed
hubertjb opened this issue Jan 30, 2020 · 25 comments

@hubertjb
Collaborator

hubertjb commented Jan 30, 2020

I've started looking at the performance of various ways of getting windows:
1- MNE: epochs.get_data(ind)[0] with lazy loading (preload=False)
2- MNE: epochs.get_data(ind)[0] with eager loading (preload=True)
3- MNE: direct access to the internal numpy array with epochs._data[index] (requires eager loading)
4- HDF5: using h5py (lazy loading)

The script that I used to run the comparison is here:
https://github.com/hubertjb/braindecode/blob/profiling-mne-epochs/test/others/profiling_mne_epochs.py
Also, I ran the comparison on a single CPU using:
>>> taskset -c 0 python profiling_mne_epochs.py
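For reference, a minimal sketch of this kind of timing loop, not the actual script linked above (the file names and the HDF5 dataset name are hypothetical, and the MNE access here uses indexing followed by get_data()):

import time

import h5py
import mne

def time_access(get_window, n_repeats=1000):
    """Return the mean time in seconds per call of get_window()."""
    start = time.perf_counter()
    for _ in range(n_repeats):
        get_window()
    return (time.perf_counter() - start) / n_repeats

idx = 0  # index of the window to extract

# 1/2 - MNE Epochs, lazy (preload=False) vs eager (preload=True)
epochs_lazy = mne.read_epochs('windows-epo.fif', preload=False)
epochs_eager = mne.read_epochs('windows-epo.fif', preload=True)
t_lazy = time_access(lambda: epochs_lazy[idx].get_data()[0])
t_eager = time_access(lambda: epochs_eager[idx].get_data()[0])

# 3 - direct access to the preloaded numpy array (bypasses all MNE logic)
t_direct = time_access(lambda: epochs_eager._data[idx])

# 4 - HDF5 via h5py: only the requested window is read from disk
with h5py.File('windows.h5', 'r') as f:
    t_h5 = time_access(lambda: f['windows'][idx])

print(t_lazy, t_eager, t_direct, t_h5)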

Here's the resulting figure, where the x-axis is the number of time samples in the continuous recording:
[Figure: timing_results]

For the moment, it looks like:
1- ._data[index] is unsurprisingly the fastest; however, it requires loading the entire data into memory.
2- HDF5 is very close, at around 0.5 ms per loop, which is great considering it only loads one window at a time.
3- get_data(index) is much slower, but this is expected since it creates a new mne.Epochs object every time it's called. Also, the gap between preload=True and preload=False is about 1.5 ms, which might be OK. The main issue, though, seems to be the linear increase in execution time as the continuous recording gets longer.

Next steps

Considering the benefits of using MNE for handling the EEG data inside the Dataset classes, I think it would be important to dive deeper into the inner workings of get_data() to see whether simple changes could make this more efficient. I can do some actual profiling on that. What do you think @agramfort @robintibor @gemeinl ?

Note: I haven't included the extraction of labels in this test.

@robintibor
Contributor

Cool, thanks for the clear info! Yes, diving a bit deeper may be helpful. Keep in mind that we will need fast access mainly during the training loop, i.e. right before returning the tensor/ndarray (in the usual case) that will be passed to the deep network. So for preload=True, accessing _data seems fine to me. The question is more the preload=False case, i.e. whether that one can be fast enough in MNE as well. So the relatively small gap for get_data there is encouraging for sure.

You could additionally do the following on a reasonable GPU to get a better sense of what kind of times we need to reach in the end: forward one dummy batch of size (64, 22, 1000) through the deep and shallow networks, compute a classification loss with dummy targets, do the backward pass, and measure the wall-clock time (don't use profilers here for now, they may not work well with GPU). Then we have a rough time we want to reach...
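For instance, a minimal sketch of such a measurement (the braindecode model construction is left out so as not to pin down constructor arguments; model stands for the shallow or deep ConvNet already moved to the GPU and is assumed to return (batch, n_classes) logits):

import time

import torch
import torch.nn.functional as F

def time_train_step(model, device='cuda', n_repeats=20, n_classes=2):
    """Wall-clock time of one forward + loss + backward pass on a dummy batch."""
    x = torch.randn(64, 22, 1000, device=device)           # dummy EEG batch
    y = torch.randint(0, n_classes, (64,), device=device)  # dummy targets

    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_repeats):
        model.zero_grad()
        out = model(x)                     # assumed shape: (64, n_classes)
        loss = F.cross_entropy(out, y)
        loss.backward()
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_repeats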

@hubertjb
Collaborator Author

Yes, I agree that the interesting case here is preload=False! With some profiling I realized that the bottleneck is the indexing, not get_data() itself. I came up with a very basic method, get_single_epoch(), that directly calls the function that reads from file instead of first creating a new Epochs object the way the standard indexing does. The results are pretty encouraging so far (sorry, not the same colours as above):

[Figure: timing_results]

It looks like we could get something pretty close to HDF5 lazy loading. Here's what the method looks like:

def get_single_epoch(self, idx, postprocess=False):
    """
    Get a single epoch.

    Parameters
    ----------
    idx: int
        Index of epoch to extract.
    postprocess: bool
        If True, apply detrending + offset + decim when loading a new epoch
        from raw; also, apply projection if configured to do so.

    Returns 
    -------
    epoch : array of shape (n_channels, n_times)
        The specific window that was extracted.
    """
    assert isinstance(idx, int)
    if self.preload:
        epoch = self._data[idx]
    else:
        epoch = self._get_epoch_from_raw(idx)
        if postprocess:
            epoch = self._detrend_offset_decim(epoch)

    if postprocess and not self._do_delayed_proj:
        epoch = self._project_epoch(epoch)

    return epoch
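For context, a hedged sketch of how a method like this could back a PyTorch Dataset for lazy window access (the class and the label handling below are hypothetical, not braindecode's actual implementation):

import torch
from torch.utils.data import Dataset

class LazyWindowsDataset(Dataset):
    """Serve windows one at a time from an mne.Epochs created with preload=False."""

    def __init__(self, epochs, labels):
        self.epochs = epochs  # mne.Epochs exposing a get_single_epoch-like method
        self.labels = labels  # one target per window

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        # Read a single window from disk without building a new Epochs object.
        window = self.epochs.get_single_epoch(int(idx), postprocess=True)
        return torch.as_tensor(window, dtype=torch.float32), self.labels[idx]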

@hubertjb
Collaborator Author

Also, I'm looking at the test you suggested @robintibor, will give an update soon.

@larsoner

@hubertjb would you prefer it if we fixed this at the MNE end? I enjoy (at least making an attempt at) solving these problems.

@hubertjb
Collaborator Author

@larsoner that'd be amazing! :) Out of curiosity, what kind of optimization do you have in mind? Would it go through a special method as above, or something else?

@hubertjb
Collaborator Author

@robintibor I've made a script to test what you suggested: https://github.com/hubertjb/braindecode/blob/profiling-mne-epochs/test/others/time_training_iteration.py

I haven't tested the Deep4 architecture yet, but with the shallow one I get about 35 ms per minibatch of 256 examples on a Tesla V100.

From the previous test, we know a conservative lower bound for getting windows is about 1 ms per window. If we need to apply some transforms (e.g., filtering, normalization, augmentation, etc.) on the fly, this will likely go up quite a bit. This means that with the shallow architecture we'd be too slow to maximize GPU usage at this batch size.
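A rough back-of-envelope check of that claim (taking the batch size as 64, the value corrected a couple of comments below; with 256 the gap would only be larger):

per_window_ms = 1.0     # conservative lower bound measured above
batch_size = 64         # corrected batch size from the follow-up comments
gpu_step_ms = 35.0      # measured forward + backward time per minibatch

loading_ms = per_window_ms * batch_size  # ~64 ms to fetch one batch sequentially
print(loading_ms > gpu_step_ms)          # True: loading alone exceeds the GPU step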

Does this fit with the numbers you had previously?

@robintibor
Contributor

Why 256 examples? In the script it is 64, right?
In any case, right now for me it is about getting a rough estimate of whether an approach may be fast enough and is therefore worth spending time on. These numbers may still be OK, I think. Keep in mind that in the end PyTorch may use multithreading to fetch multiple windows at the same time, and there may be additional overhead apart from the train step, like storing losses etc. So proceeding in this direction seems promising to me at the moment.

@hubertjb
Collaborator Author

Sorry, I meant 64! And yep, good point concerning multithreading.

I agree with you: if we can get that close to HDF5 performance with a few modifications to MNE, then we're on a good path.

@larsoner

Out of curiosity, what kind of optimization do you have in mind? Would it go through a special method as above, or something else?

No, I will try to optimize epochs.get_data itself to avoid whatever the bottleneck is under the hood in MNE.

@hubertjb
Collaborator Author

No, I will try to optimize epochs.get_data itself to avoid whatever the bottleneck is under the hood in MNE.

Ok, cool. In case it's useful: on my end, the epochs.copy() in _getitem() seemed to be the culprit.

@larsoner

Ahh yes, that would make sense. Then it would scale with the number of epochs, etc. I'll see how bad it will be to fix this.

@larsoner

One possible API would be a new argument, item, to epochs.get_data:

out = epochs.get_data(item=0)

This would get epoch index 0. You could pass to item whatever you currently pass to epochs[item]. It makes the code much simpler if we require that bad epochs be dropped before allowing this, which at least makes some sense, as it ensures you get back out.shape[0] == np.size(item) (for appropriate inputs).
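A short sketch of how the proposed argument would be used (index values hypothetical):

epochs.drop_bad()                            # required before single-item access
window = epochs.get_data(item=5)[0]          # one (n_channels, n_times) window
first3 = epochs.get_data(item=slice(0, 3))   # anything epochs[...] would accept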

If I implement this, I get the following timings:

[Figure: timing_results]

@hubertjb
Collaborator Author

hubertjb commented Feb 1, 2020

Thanks @larsoner, very elegant solution! I think this should satisfy our requirements. I have one question:

I understand that you need to call epochs.drop_bad() before calling epochs.get_data(item=idx). However, if I understand correctly, drop_bad starts by loading the entire data in memory. If that's the case, then this defeats our purpose here as we want to avoid eager loading. If drop_bad() only drops epochs based on signal quality criteria, could we just "accept" any epoch, and in doing so bypass the use of drop_bad()?

@larsoner

larsoner commented Feb 1, 2020

drop_bad starts by loading the entire data in memory

Not quite. It loads each epoch one at a time and then discards it from memory (does not keep it around). Is that okay?

@larsoner

larsoner commented Feb 1, 2020

If drop_bad() only drops epochs based on signal quality criteria, could we just "accept" any epoch, and in doing so bypass the use of drop_bad()?

It's also based on things like whether or not the entire epoch is actually extractable from the raw data (e.g., a 10-second epoch from an event occurring one second before the end of the raw instance would be discarded as TOO_SHORT or so).

If it's desirable to just check these limits without actually loading the data, we might be able to do that, but it would again be a larger refactoring.

@hubertjb
Collaborator Author

hubertjb commented Feb 1, 2020

drop_bad starts by loading the entire data in memory

Not quite. It loads each epoch one at a time and then discards it from memory (does not keep it around). Is that okay?

Ok, got it. So in our case, I guess we would run drop_bad() only once, before training. This would read all the data once, so it might take some time on very large datasets; however, since it's done sequentially, memory would not be an issue.

I think this should be fine for now. If calling drop_bad() once before training ends up taking too much time, we might want to look into other options though.
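To summarize the workflow being converged on here, a hedged sketch (file name and epoching parameters are hypothetical):

import mne

raw = mne.io.read_raw_fif('recording_raw.fif', preload=False)
events, _ = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, tmin=0., tmax=4., baseline=None, preload=False)

# One-time pass before training: each epoch is read sequentially, checked and
# then discarded from memory, so the whole dataset is never held in RAM at once.
epochs.drop_bad()

# During training, a single window can then be fetched lazily:
window = epochs.get_data(item=0)[0]  # proposed `item` argument discussed above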

@robintibor
Contributor

Yes, I agree with @hubertjb that this case (loading into and dropping from memory once at the start of training) may be fine for us for now; we will see if it causes any practically relevant problems once we have implemented more full examples.

@larsoner

larsoner commented Feb 1, 2020

Okay great, let us know if it does indeed end up being a bottleneck.

@hubertjb
Collaborator Author

hubertjb commented Feb 8, 2020

I started working on an example that shows how to do lazy vs eager loading on real data and that compares the running times of both approaches (https://github.com/hubertjb/braindecode/blob/lazy-loading-example/examples/plot_lazy_vs_eager_loading.py). It's still not a perfectly realistic scenario - for instance, no preprocessing/transforms are applied - but it should be better than my previous tests. Here are some preliminary results (they are subject to change)!

So far, using the first 10 subjects of the TUH Abnormal dataset, and training a shallow net for 5 epochs on a normal/abnormal classification task, the breakdown is (average of 15 runs, in seconds):

              data_preparation  training_setup  model_training
loading_type
eager             6.539236        0.003837       18.746278
lazy              9.040427        0.136949       59.257529

Observations:

  1. Data preparation takes longer for lazy loading, most likely because of drop_bad(), which loads each window one at a time. This is OK for now but we might want to optimize it eventually.
  2. I'm not sure why training setup time is larger for lazy loading, but in the end this is pretty short anyway. To investigate.
  3. Model training time is >3x longer with lazy loading.

Next steps:

  • Look into the num_workers argument of DataLoader to see whether it can help mitigate the overhead of lazy loading (see the sketch after this list)
  • Once we agree on an API for on-the-fly transforms (Improving the transforms #72), we can add e.g. a filtering step and see what impact it has on running times
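As referenced in the first bullet, a minimal sketch of the num_workers experiment (lazy_dataset stands for a Dataset built on the lazily-loaded windows, e.g. the one sketched earlier; it is not defined here):

from torch.utils.data import DataLoader

# Several worker processes prefetch and assemble batches in the background,
# so window loading can overlap with the GPU forward/backward pass.
loader = DataLoader(lazy_dataset, batch_size=64, shuffle=True, num_workers=8)

for X, y in loader:
    X, y = X.cuda(), y.cuda()
    # ... forward / backward / optimizer step ...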

@robintibor
Contributor

Great @hubertjb. Seems we are getting into a reasonable training time range. It would also be interesting to see how big the difference is for Deep4. And as you said, maybe num_workers would already close the gap enough to consider this finished. I would say a gap of 1.5x for Deep4 would be acceptable to me.

@hubertjb
Collaborator Author

Some good news: using num_workers makes lazy loading a lot more competitive (https://github.com/hubertjb/braindecode/blob/lazy-loading-example/examples/plot_lazy_vs_eager_loading.py).

Without (num_workers=0):

              data_preparation  training_setup  model_training
loading_type
eager                 8.866640        0.003532       25.106193
lazy                 12.615696        0.418142       72.939527

With (num_workers=8):

              data_preparation  training_setup  model_training
loading_type
eager                  8.82214        0.004220       28.750993
lazy                  23.60280        0.422488       30.715090

The two types of loading are much closer now. Also, I noticed GPU usage stayed at 100% during training, which means the workers did what they were supposed to do. I'm not sure why data preparation time is so much longer for lazy loading with num_workers=8 though; I'll have to investigate.

Also, @robintibor, I tried with Deep4, keeping all other parameters equal:

              data_preparation  training_setup  model_training
loading_type
eager             8.902757        0.022913       11.306941
lazy             22.113035        0.456256       17.751904

Surprisingly, it seems more efficient than the Shallow net... Is that expected? I haven't played with the arguments much, maybe I did something weird.

@robintibor
Contributor

No, the behavior is very unexpected: time spent on the GPU forward/backward pass should be longer for the deep net, and therefore the difference between lazy and eager should be smaller. It is also very strange that for you deep is faster than shallow; that should not be the case.

Before investigating further, please add the following line somewhere before your main loop:

torch.backends.cudnn.benchmark = True

(see https://discuss.pytorch.org/t/what-does-torch-backends-cudnn-benchmark-do/5936 for why it should always be set to True for our cases)

Numbers should improve overall. However, this does not explain any of the current behavior; as said, deep should be slower than shallow, not ~3 times faster (in eager mode).

@hubertjb
Collaborator Author

Latest results can be found in #75.

@robintibor
Contributor

robintibor commented Mar 2, 2020

@hubertjb I have rerun your perf code after the fix in #89 (which improves preload=True performance quite a lot) for batch_size=64 and cuda=True. These are the results (averaged over the 3 repetitions):

    preload  len  model    n_work  data_prep  setup  training
0   False     2   deep     0       11.9       0.1    21.7
12  True      2   deep     0       7.5        0.1    2.6
1   False     2   deep     8       11         0.1    11.4
13  True      2   deep     8       7.6        0      3.5
4   False     4   deep     0       16.8       0.2    29.5
16  True      4   deep     0       7.8        0.1    3.3
5   False     4   deep     8       15.4       0.1    13.2
17  True      4   deep     8       7.9        0.1    4.6
8   False     15  deep     0       28.2       0.2    57.1
20  True      15  deep     0       10         0.1    8.9
9   False     15  deep     8       31.5       0.2    25.8
21  True      15  deep     8       9.8        0.1    11.4
2   False     2   shallow  0       10.5       0.1    24
14  True      2   shallow  0       7.5        0      2.2
3   False     2   shallow  8       12.8       0.1    10.8
15  True      2   shallow  8       7.5        0.1    3.6
6   False     4   shallow  0       10.5       0.1    29.2
18  True      4   shallow  0       7.8        0      3.3
7   False     4   shallow  8       14.8       0.1    13.3
19  True      4   shallow  8       7.9        0.1    5.3
10  False     15  shallow  0       31.2       0.2    53.9
22  True      15  shallow  0       10         0.1    12.3
11  False     15  shallow  8       27.2       0.2    26.6
23  True      15  shallow  8       9.9        0.1    15.1

As can be seen, for eager loading num_workers=0 is better, whereas for lazy loading num_workers=8 is better.
Therefore I made a comparison table for these two settings:

   win_len_s  model_kind  train_lazy  train_eager  ratio
0  2          deep        11.4        2.6          4.4
1  4          deep        13.2        3.3          4
2  15         deep        25.8        8.9          2.9
3  2          shallow     10.8        2.2          4.9
4  4          shallow     13.3        3.3          4
5  15         shallow     26.6        12.3         2.2

I don't know which of these win_len settings is a realistic one; what do they correspond to? It would be great if we can get all of the ratios to <2 at some point, but it's good that we have a running implementation in any case.

