Trying to dynamically change weighted connections between training examples #40576

Open
MoyoT opened this issue Jun 18, 2020 · 0 comments
Labels: comp:keras (Keras related issues), stat:awaiting tensorflower (Status - Awaiting response from tensorflower), type:feature (Feature requests)

Comments


MoyoT commented Jun 18, 2020

Please make sure that this is a feature request. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:feature_template

System information

  • TensorFlow version (you are using): 2.0
  • Are you willing to contribute it (Yes/No): No

Describe the feature and the current behavior/state.
I would like to be able to change the routing of the weighted connections between training examples. I have developed the theory for a type of neural network that takes as input a random vector of ints between 0 and 10 (and maps it to an image). It then takes these ints and, after adding 200 multiples of 10 for each dimension, uses this result as the index in tf.gather with the default weight matrix, which is of shape (10, 200). The reason we add multiples of 200 is that the first hidden layer will have 200 neurons, and we want each possible number in the input to come with its own weighted connections. So between examples we will be loading different weighted connections into the layer's weight matrix, depending on the input.
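To illustrate the gather-based routing described above, here is a minimal sketch. The table shape (10, 200) follows the issue text; the input length, variable names, and example values are assumptions, not the author's code.

```python
import tensorflow as tf

num_values = 10     # input ints lie in 0..9
hidden_units = 200  # first hidden layer width (per the issue text)

# One row of connections per possible input value.
weight_table = tf.Variable(tf.random.normal([num_values, hidden_units]))

x = tf.constant([3, 7, 3, 0])          # example int input vector (assumed length 4)
selected = tf.gather(weight_table, x)  # shape (4, 200): one row per dimension

# Dimensions holding the same value load identical connection vectors.
same = tf.reduce_all(selected[0] == selected[2])
```

Here `tf.gather` re-routes which rows of the table act as the layer's weights for each example, which is the per-input dynamic loading the paragraph describes.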

If two dimensions happen to have the same number, they will both load the same connections in their respective dimensions. On top of this, to make each entry in the dimension unique, we employ a circulant routing where the first neuron enters its entries in root position, the next one rotates them by one, the next one rotates them by two, and so on. So a 2 in the nth position will have a different effect on the next layer from a 2 in the (n+1)th position.
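The circulant routing above can be sketched with `tf.roll`: dimension n rotates its gathered weight vector by n, so the same value contributes differently at different positions. Shapes and names are assumptions.

```python
import tensorflow as tf

num_values, hidden_units = 10, 200
weight_table = tf.random.normal([num_values, hidden_units])

x = tf.constant([2, 2, 5, 2])          # repeated values on purpose
selected = tf.gather(weight_table, x)  # (4, 200); rows 0, 1, 3 are identical

# Circulant routing: rotate the row gathered for dimension n by n positions.
rotated = tf.stack(
    [tf.roll(selected[n], shift=n, axis=0) for n in range(x.shape[0])])
```

After the rotation, the identical rows for the repeated value 2 are no longer aligned, giving each position a distinct effect on the next layer.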

Additionally, once the input layer is arranged dynamically, we mirror the form within other layers that inherit from the input layer. So if there is a 3 in the nth position of the first layer, the next layer will have "3" weighted connections in its nth position. Note that these are not the same weighted connections that the first layer had, but a corresponding vector of weighted connections for the next layer. (We could not use the input into that layer to order its connections, because tf.gather requires indices that are ints.)
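The mirroring described above could look something like this: the second layer keeps its own table of connections but is indexed by the same input ints as the first layer. All names and shapes here are assumptions for illustration.

```python
import tensorflow as tf

num_values, h1, h2 = 10, 200, 50  # h2 (second layer width) is assumed

table1 = tf.random.normal([num_values, h1])  # first-layer connections
table2 = tf.random.normal([num_values, h2])  # corresponding second-layer connections

x = tf.constant([3, 1, 4, 1])  # the int input orders both layers' routing
w1 = tf.gather(table1, x)      # layer-1 routing, shape (4, 200)
w2 = tf.gather(table2, x)      # mirrored layer-2 routing, shape (4, 50)
```

Because both gathers share the int indices from the input, a 3 in the nth position selects the nth-position connection vector in both tables, without ever needing to index by the (float) activations of the first layer.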

I would also like to somehow do this for convolutional layers too, if the logic permits.

Below is a link to a Colab file with code that attempts to do all of this.

Will this change the current API? How?
I am not sure whether it will, because I have been trying to do it without success.

Who will benefit from this feature?

This is a new generative algorithm that will benefit the AI community in general.

Any other info.
https://colab.research.google.com/drive/1mWAQH1jJMFJ0NTKrqLxzet47aUXm9wWz?usp=sharing

@MoyoT MoyoT added the type:feature Feature requests label Jun 18, 2020
@Saduf2019 Saduf2019 added the comp:keras Keras related issues label Jun 19, 2020
@Saduf2019 Saduf2019 assigned gowthamkpr and unassigned Saduf2019 Jun 19, 2020
@gowthamkpr gowthamkpr assigned omalleyt12 and unassigned gowthamkpr Jun 22, 2020
@gowthamkpr gowthamkpr added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jun 22, 2020