
[not issue, only questions] Is this tech "one neural net for all" or one neural net per person? #1

Open
YagaoDirac opened this issue Nov 17, 2022 · 3 comments

Comments

@YagaoDirac

These are only questions.
I'm not a deep learning professional, and I haven't read the paper.
I have two questions.
1. Is this technology "one neural net for all" or one neural net per person?
If it is one-for-all, that means that if I buy some gear to read my brain, whether an implant (like Neuralink) or a helmet (there are many prototypes on YouTube), all I need to do is download this neural net and everything is good to go.
If it is per person, that means I have to train a neural net for myself and fine-tune it periodically, and if I switch to different gear, I probably have to redo all that work.

2. Have you tested this with artists who work on photo-realistic digital painting?
I've trained in this area for two or three years, and I believe that hard training significantly changed the way my brain handles visual stimuli. Most people can't handle rotation in vision, because our brains prefer to reduce shapes to tokens. Tokens don't have directions, or even precise lengths or ratios. You can find some training tools in my GitHub that help with this kind of training; I had a really hard time training on these.

Let me offer a suggestion: you could test with some basic geometric shapes. That would probably show how the brain deals with simple images and their basic elements: shape, count, color, texture, ratio, relative position.
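To make the suggestion concrete, here is a minimal sketch of a synthetic stimulus generator for that kind of controlled test, varying one factor (shape, size, position) at a time. All names and the raster-grid representation are illustrative assumptions, not part of the project's code.

```python
# Render simple parametric shapes into a binary grid, so stimuli can be
# varied along one factor at a time (shape, size, position).
# Illustrative sketch only; not the project's actual test harness.

def render(shape, cx, cy, r, size=32):
    """Render one filled shape (center cx, cy; half-width/radius r)
    into a size x size binary grid."""
    grid = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            if shape == "square":
                inside = abs(x - cx) <= r and abs(y - cy) <= r
            elif shape == "circle":
                inside = (x - cx) ** 2 + (y - cy) ** 2 <= r * r
            else:
                raise ValueError(f"unknown shape: {shape}")
            if inside:
                grid[y][x] = 1
    return grid

def area(grid):
    """Count filled pixels, e.g. to check size/ratio across stimuli."""
    return sum(sum(row) for row in grid)
```

For example, `render("square", 16, 16, 5)` fills an 11×11 block (121 pixels), and sweeping `r` or `cx` produces matched stimuli that differ only in size or position.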

@tobyxdd

tobyxdd commented Nov 17, 2022

I'm afraid this is not as advanced as you might think.

First of all, this only reconstructs what a person's eyes are currently seeing, not what they are "picturing" in their mind. Secondly, these models rely heavily on pretrained image priors, and I seriously doubt they can handle content significantly different from the training set. And lastly, yes, you must retrain the model for each individual.


@hardoc

hardoc commented Nov 17, 2022

I think you're being overly pessimistic @tobyxdd.

Reading their documentation and trying the code on publicly available datasets, it seems to work much better than previous solutions.

They even specifically mention that they tried to address overfitting and validated on multiple datasets.

The training data is diverse enough to generalize reasonably well across general human vision.

However, the fact that cross-person transfer still isn't possible is a real problem. Even so, I would say full training isn't needed for this to work on another person. Judging by how it's built, fine-tuning should be enough, and should take less than 5% of the initial training time.
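The "fine-tune instead of retrain" argument boils down to keeping a shared backbone frozen and adapting only a small subject-specific readout. Below is a toy sketch of that idea in plain Python; the linear backbone, the SGD head, and all names are illustrative assumptions, not the project's actual architecture or training code.

```python
# Toy "frozen backbone + per-subject head" sketch: adapting to a new
# person by training only a small readout on top of shared features.
# Illustrative only; not the project's code.
import random

random.seed(0)

DIM_IN, DIM_FEAT = 8, 4

# Shared backbone: a fixed (frozen) random linear map, identical for
# every subject. In a real system this is the pretrained model.
W_BACKBONE = [[random.uniform(-1, 1) for _ in range((DIM_IN))]
              for _ in range(DIM_FEAT)]

def features(x):
    """Frozen backbone forward pass: features = W_BACKBONE @ x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_BACKBONE]

def fine_tune_head(samples, steps=300, lr=0.05):
    """Fit only the per-subject linear readout with plain SGD;
    the backbone weights are never touched."""
    w = [0.0] * DIM_FEAT
    for _ in range(steps):
        for x, y in samples:
            f = features(x)
            err = sum(wi * fi for wi, fi in zip(w, f)) - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return w

# Only the head's DIM_FEAT weights are trainable; the backbone's
# DIM_FEAT * DIM_IN weights stay frozen, so per-subject adaptation
# touches a small fraction of the parameters (here 4 of 36).
```

At this toy scale the head is roughly 11% of the parameters; the "less than 5%" figure above is the commenter's estimate for the real model, which this sketch illustrates but does not verify.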

@YagaoDirac
Author

YagaoDirac commented Jan 9, 2023

> And lastly, yes, you must re-train the model for each individual.

Thanks. I asked about this detail because I'm curious what would still need to be done after the surgery to get a Neuralink or anything similar working. It seems that no matter what the surgery gives me, I would still have to do a lot of per-individual work to get it working for "me", which may limit the usefulness of such implants.
