Some help with reproduction #1
Small update @935963004: I think we solved the issue with the number of channels and the positional embedding (I opened a PR). We are still facing problems with …
I think you should set use_abs_pos_emb=True, use_rel_pos_bias=False, use_shared_rel_pos_bias=False. Does this work for you?
Yes @935963004, with PR #2 it will work, but I don't really understand the reason for having options that aren't used anywhere, and I am not sure if the modifications are okay with you. Another thing I was thinking about is how the patches are built. Right now there is no patch construction within the network, i.e. the network already expects the input [batch, n_chans, num_patch, patch size]; why is this not learned during training, as in ViT or BEiT (1, 2 or 3)? I really appreciate your input on this! 🙏
The input x is [batch, n_chans, num_patch, patch size]. In TemporalConv, x is first transformed to [batch, n_chans * num_patch, patch size]. Then, to use torch.nn.Conv2d(), x is unsqueezed to [batch, 1, n_chans * num_patch, patch size], where 1 is the in_chans, just like RGB for images, so it is fixed and cannot be changed. After several convolutional layers, x is transformed back to [batch, n_chans * num_patch, patch size], which can be passed into the Transformer encoder as input. Does this explanation help you?
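To illustrate the shape bookkeeping described above, here is a minimal sketch (the channel count, patch numbers and the single Conv2d layer are placeholders for illustration; the real TemporalConv stack lives in the repository):

import torch
import torch.nn as nn

# Illustrative sizes only, not taken from the repository
batch, n_chans, num_patch, patch_size = 2, 62, 4, 200

x = torch.randn(batch, n_chans, num_patch, patch_size)

# Merge electrodes and patches into one token axis, then add a singleton
# in_chans dimension so Conv2d treats the signal like a 1-channel image.
x = x.reshape(batch, n_chans * num_patch, patch_size).unsqueeze(1)
# x: [batch, 1, n_chans * num_patch, patch_size]

# Stand-in for the convolutional stack inside TemporalConv; only the shape
# contract matters here, not the actual kernel sizes.
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=(1, 15),
                 stride=(1, 8), padding=(0, 7))
x = conv(x)
# x: [batch, 8, n_chans * num_patch, reduced_width]

# Fold the conv features back into one embedding per (channel, patch) token,
# which is what the Transformer encoder consumes.
b, d, tokens, w = x.shape
x = x.permute(0, 2, 1, 3).reshape(b, tokens, d * w)
print(x.shape)  # [batch, n_chans * num_patch, feature dim]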
@935963004 Are the input channels always expected to be 1? Because, in the code, TemporalConv is only called with in_chan = 1.
Exactly |
@935963004 I am a bit confused: what about multi-channel EEGs?
The input x [batch, n_chans, num_patch, patch size] is multi-channel EEG. in_chan and n_chans are two different things. in_chan is just for the convolution operation, so it is set to 1 (actually we just reshape the original input from [batch, n_chans, num_patch, patch size] to [batch, 1, n_chans * num_patch, patch size]), while n_chans is the number of electrodes for multi-channel EEG.
@935963004 Thank you very much! @bruAristimunha I guess we are good without my changes then.
Ok, thanks @935963004 and @RashikShahjahan! Last thing for me: I was wondering, could you please clean up the code a little or put a docstring inside the model? The names of the variables within the model are not super obvious, and I'm pretty sure this will lead other users to open more issues or send emails to you or to the rest of the authors. I truly understand and empathize with all the effort you've put into your model, and I also understand that during development some decisions are not always optimal. However, I would like to thank you in advance for any effort you can make toward an easier reproduction. Have a nice day!
I'm sorry for the inconvenience and I appreciate your suggestion. I will add some annotations for better understanding in the following days.
Hey @935963004, I have some more questions for you: in the temporal embedding, you define a table of 16 entries; what is the reason for choosing this number? I couldn't find it anywhere in the code or the paper. It looks like you've always had the same number of patches, is this correct? It seems linked to the number of patches, but I'm not sure. The same question for the position embedding: there seem to always be 128 positions, and I couldn't work out the math behind these numbers. https://github.com/935963004/LaBraM/blob/main/modeling_finetune.py#L283
These numbers are set to meet the maximum requirements of our paper. In fact, you can set them to any number you like as long as they meet your maximum requirements.
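As a minimal sketch of that answer (16 and 128 are the maxima asked about above; embed_dim and the electrode indices below are placeholders): the tables are allocated once at the maximum size, and only the entries matching the actual input are used at run time.

import torch
import torch.nn as nn

embed_dim = 200      # placeholder embedding width
max_positions = 128  # maximum number of spatial positions needed across experiments
max_time = 16        # maximum number of temporal patches needed across experiments

pos_embed = nn.Parameter(torch.zeros(1, max_positions, embed_dim))
time_embed = nn.Parameter(torch.zeros(1, max_time, embed_dim))

# Only the rows matching the electrodes actually present are selected,
# e.g. via the input_chans indexing quoted later in this thread.
input_chans = torch.tensor([0, 3, 7, 21])   # placeholder electrode indices
pos_embed_used = pos_embed[:, input_chans]  # [1, 4, embed_dim]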
Hi all, I'm also facing problems with reproduction.
There are various ways to implement this with your own dataset. Just make sure the dataloader and ch_names fit our implementation. You can refer to run_class_finetuning.py and replace the get_dataset function with your own.
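A hedged sketch of what such a replacement could look like (MyEEGDataset, the shapes and the electrode names are assumptions, not code from the repository; the only hard requirement stated above is that the dataloader and ch_names fit the implementation):

import torch
from torch.utils.data import Dataset, DataLoader

class MyEEGDataset(Dataset):
    """Placeholder dataset yielding (signal, label) pairs."""
    def __init__(self, signals, labels):
        # signals: float tensor [n_samples, n_chans, num_patch, patch_size]
        # labels:  long tensor  [n_samples], class indices starting at 0
        self.signals = signals
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.signals[idx], self.labels[idx]

# Electrode names must match the montage your data was recorded with.
ch_names = ["FP1", "FP2", "C3", "C4"]

signals = torch.randn(100, len(ch_names), 4, 200)
labels = torch.randint(0, 10, (100,))
loader = DataLoader(MyEEGDataset(signals, labels), batch_size=16, shuffle=True)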
I'm getting an error on the positional embedding when using the default settings, at this line:
pos_embed_used = self.pos_embed[:, input_chans] if input_chans is not None else self.pos_embed
I think you should set abs_pos_emb to True in args. I recommend using the provided script in the README:
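For anyone hitting the same thing, a minimal sketch of why the default fails (the guard logic is paraphrased, not copied from the repository): with use_abs_pos_emb left at False the model never builds self.pos_embed, so the indexing quoted above falls over on None.

import torch

use_abs_pos_emb = False   # default in some configurations
pos_embed = torch.zeros(1, 128, 200) if use_abs_pos_emb else None

input_chans = torch.tensor([0, 1, 2])
# pos_embed[:, input_chans]  -> TypeError: 'NoneType' object is not subscriptable
# Setting abs_pos_emb to True (as suggested above) makes pos_embed a real
# tensor and the indexing works.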
Have you made any progress in terms of processing your own dataset?
Yes! With some tweaks to the provided script I was able to process my own dataset. I was working on the MindBigData dataset, but unfortunately I was not able to get good results for its task. I think it could be due to insufficient signal in the dataset for the task, or the embeddings not being suitable for it. I was only able to get a bit over 30% accuracy on a 10-class classification. Better than chance, but not good enough for anything major, I think.
I think this may be because the raw model doesn't involve your own tasks, so the accuracy leaves something to be desired; try pre-training with your own tasks. What should I do to feed my own data into the original model? Take a .cnt file for a classification task, for example.
Hello, could you tell me how you made those adjustments?
I first converted my own dataset into signals and labels, and then defined some new DataLoaders myself:
test_dataset = MIND2BLoader(
Could I ask how you set up your data labels? Do they start from 0? (My labels are four classes: 1 2 3 4)
Yes, 0 to 9.
Hello @935963004,
I would like to start by saying thank you for your work; I think it is fundamental and necessary work in EEG decoding. Thank you for that!
So, I am trying to understand and run your code, but some things are not working, and I would like to request your assistance, starting from the beginning with a toy example.
My questions are:
"(batch, channel, time_steps)"
In my naive intuition, if I change in_chans everything should work because of the TemporalConv module, but it does not.
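For context, this is roughly how I am shaping my toy input before calling the model (the channel count, signal length and patch_size are my own guesses, not values from the repository):

import torch

batch, channel, time_steps = 4, 62, 800   # toy (batch, channel, time_steps) input
patch_size = 200                          # guessed patch length in samples

raw = torch.randn(batch, channel, time_steps)

# Split the time axis into non-overlapping patches, dropping any trailing
# samples that do not fill a whole patch.
num_patch = time_steps // patch_size
x = raw[..., : num_patch * patch_size].reshape(batch, channel, num_patch, patch_size)
print(x.shape)  # torch.Size([4, 62, 4, 200]) -> [batch, n_chans, num_patch, patch size]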
FYI @LemonFace0309, @jonxuxu, @shahbuland and @RashikShahjahan
All the best!