
add external memory network demo #696

Closed

Conversation

Haichao-Zhang (Contributor)

This PR adds an example implementation of an external memory network, together with example usage on a simple task.
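For orientation, the core read operation of such an external memory network (content-based addressing via cosine similarity over memory slots, followed by a softmax, as in Neural Turing Machines) can be sketched in plain NumPy. The function and variable names below are illustrative, not code from this PR:

```python
import numpy as np

def cosine_read(memory, key, eps=1e-8):
    """Content-based read: softmax over the cosine similarity
    between a key vector and each memory slot, then a weighted
    sum of the slots."""
    # memory: (num_slots, slot_width), key: (slot_width,)
    dots = memory @ key
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps
    sims = dots / norms
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()          # read weights sum to 1
    return weights @ memory           # weighted sum of slots

memory = np.array([[1.0, 0.0], [0.0, 1.0]])
key = np.array([1.0, 0.1])
print(cosine_read(memory, key))
```

With an identity-like memory, the read vector equals the read weights, so the slot most similar to the key dominates.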

@Haichao-Zhang Haichao-Zhang force-pushed the ext_mem_demo branch 2 times, most recently from e23f54e to 1dddec7 Compare December 2, 2016 00:58
@Haichao-Zhang Haichao-Zhang changed the title add external memory demo add external memory network demo Dec 2, 2016
self.name = name
self.mem_slot_size = mem_slot_size
self.mem_fea_size = mem_fea_size
self.scale = 5
Collaborator:

self.scale = scale

Haichao-Zhang (Contributor, Author):

This has been corrected.

self.scale = 5
self.external_memory = memory(name=self.name,
size=mem_fea_size*mem_slot_size,
boot_bias= ParamAttr(initial_std=0.01,
Collaborator:

bad indent

Haichao-Zhang (Contributor, Author):

This has been updated.

bias_attr = False,
act = SoftmaxActivation(),
size = self.mem_slot_size,
name='read_weight')
Collaborator:

To avoid name conflicts when using multiple memories, this and the other layer names should be prefixed with self.name.

Haichao-Zhang (Contributor, Author):

Similar issues have been addressed.
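One way to implement the reviewer's suggestion, assuming the class stores its instance name in self.name (the helper below is a hypothetical sketch, not code from the PR):

```python
class ExternalMemory(object):
    def __init__(self, name):
        self.name = name

    def _layer_name(self, suffix):
        # Prefix every internal layer name with the instance name so
        # that two ExternalMemory objects used in the same network
        # never produce colliding layer names.
        return "%s_%s" % (self.name, suffix)

mem_a = ExternalMemory("mem_a")
mem_b = ExternalMemory("mem_b")
print(mem_a._layer_name("read_weight"))  # mem_a_read_weight
print(mem_b._layer_name("read_weight"))  # mem_b_read_weight
```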


return memory_output

def MakeConstantVector(self, vec_size, value, dummy_input):
Collaborator:

Python naming convention: make_constant_vector.

Haichao-Zhang (Contributor, Author):

Changed the function name following the convention.
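As a PEP 8 sketch of what the renamed helper computes, here is a NumPy stand-in for the layer-based construction (the implementation is illustrative; the real helper builds a constant vector out of network layers):

```python
import numpy as np

def make_constant_vector(vec_size, value):
    """snake_case replacement for MakeConstantVector: a vector of
    length vec_size filled with a constant value."""
    return np.full(vec_size, value, dtype=np.float64)

print(make_constant_vector(4, 1.0))  # [1. 1. 1. 1.]
```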

memory_removed = mixed_layer(input = [identity_projection(input=self.external_memory),
identity_projection(input=memory_remove_neg)],
bias_attr = False,
act = LinearActivation())
Collaborator:

Lines 78 and 81 can be combined and written as: memory_removed = self.external_memory - memory_remove.
See https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/trainer_config_helpers/tests/configs/math_ops.py

Haichao-Zhang (Contributor, Author):

This part of the code has been updated to use math_ops.
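Numerically, the two formulations are equivalent: summing identity projections of the memory and the negated removal term computes the same result as a plain subtraction. A NumPy check with illustrative values:

```python
import numpy as np

memory = np.array([[0.5, 0.5], [0.2, 0.8]])
memory_remove = np.array([[0.1, 0.0], [0.0, 0.2]])

# mixed_layer style: sum of identity projections, one input pre-negated
via_mixed = memory + (-memory_remove)

# math_ops style: operator overloading on layer outputs
via_math_ops = memory - memory_remove

print(np.allclose(via_mixed, via_math_ops))  # True
```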

print_layer(input=[erase_vec])
print_layer(input=[add_vec])

out_prod = out_prod_layer(norm_cosine_similarity_write, erase_vec, name="outer")
Collaborator:

Creating a constant vector erase_vec for this is very ugly. A nicer way is to enhance the "repeat" layer to allow repeating in both directions, similar to "repmat" in MATLAB.

Haichao-Zhang (Contributor, Author):

Looking into the repeat layer currently.
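The repmat-style expansion the reviewer describes corresponds to NumPy's tile (or broadcasting): instead of materializing a constant vector and taking an outer product, each factor is repeated along its missing axis and the results are multiplied elementwise. A sketch with illustrative shapes:

```python
import numpy as np

num_slots, slot_width = 3, 4
write_weights = np.array([0.2, 0.5, 0.3])        # (num_slots,)
erase_vec = np.array([1.0, 0.0, 1.0, 0.0])       # (slot_width,)

# Outer-product formulation (what out_prod_layer computes):
erase_outer = np.outer(write_weights, erase_vec)

# repmat/tile formulation: repeat each factor along the missing
# axis, then multiply elementwise.
erase_tiled = np.tile(write_weights[:, None], (1, slot_width)) \
            * np.tile(erase_vec[None, :], (num_slots, 1))

print(np.allclose(erase_outer, erase_tiled))  # True
```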


out_prod_add = out_prod_layer(norm_cosine_similarity_write, add_vec, name="outer_add")

memory_output = mixed_layer(input = [identity_projection(input=memory_removed),
Collaborator:

Using addto_layer can make this look simpler.

Haichao-Zhang (Contributor, Author):

Switched to addto_layer.
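addto_layer sums its inputs elementwise, so the final memory update reduces to adding the erased memory and the outer-product add term (the NTM-style write step). A NumPy sketch with illustrative values:

```python
import numpy as np

memory_removed = np.array([[0.4, 0.5], [0.2, 0.6]])
write_weights = np.array([0.5, 0.5])
add_vec = np.array([0.2, 0.0])

# out_prod_layer equivalent: (num_slots, slot_width) add term
out_prod_add = np.outer(write_weights, add_vec)

# addto_layer equivalent: elementwise sum of its inputs
memory_output = memory_removed + out_prod_add
print(memory_output)
```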

from paddle.trainer_config_helpers import *


class ExternalMemory(object):
Collaborator:

The class and its member functions need comments.

Haichao-Zhang (Contributor, Author):

Comments have been added to both the class and member functions.

@luotao1 (Contributor) commented Feb 1, 2019:

Thanks for contributing to PaddlePaddle! Since V1/V2 will no longer be maintained, and the related code has already been deleted from the develop branch, we are closing this PR. You are welcome to contribute to Fluid, the latest version of PaddlePaddle.

@luotao1 closed this Feb 1, 2019
zhhsplendid pushed a commit to zhhsplendid/Paddle that referenced this pull request Sep 25, 2019
yaozhixin pushed a commit to graphcore/Paddle-fork that referenced this pull request May 23, 2022
3 participants