This repository has been archived by the owner on Nov 22, 2022. It is now read-only.

Allow model to take byte-level input and make byte-level prediction #1187

Closed
wants to merge 1 commit

Conversation

@FanW123 FanW123 commented Dec 10, 2019

Summary:
This diff creates a ByteLM model that takes byte-level input and produces byte-level output.

  1. Add BYTE_BOS and BYTE_EOS to ByteTensorizer, as they are needed by the language model
  2. Create a byte_lm_output_layer
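The two steps above can be illustrated with a minimal sketch (not the actual PyText implementation; the names BYTE_BOS and BYTE_EOS come from the PR summary, everything else is assumed): a byte-level tensorizer works over the 256 raw byte values, so sequence-boundary markers for a language model must be given ids outside that range, which in turn grows the output vocabulary to 258.

```python
# Hypothetical sketch of byte-level tensorization with BOS/EOS markers.
# Raw bytes occupy ids 0-255; the two boundary symbols get the next ids.
NUM_BYTES = 256
BYTE_BOS = NUM_BYTES      # 256: beginning-of-sequence marker
BYTE_EOS = NUM_BYTES + 1  # 257: end-of-sequence marker
VOCAB_SIZE = NUM_BYTES + 2  # output layer predicts over 258 classes

def byte_tensorize(text: str) -> list:
    """Encode text as UTF-8 byte ids, wrapped in BOS/EOS markers."""
    return [BYTE_BOS] + list(text.encode("utf-8")) + [BYTE_EOS]

def byte_detensorize(ids: list) -> str:
    """Invert byte_tensorize, dropping the boundary markers."""
    return bytes(i for i in ids if i < NUM_BYTES).decode("utf-8")
```

A byte-level output layer would then project hidden states onto VOCAB_SIZE logits rather than a word vocabulary, so the model can emit BYTE_EOS to terminate generation.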

Reviewed By: kmalik22

Differential Revision: D18834414


fbshipit-source-id: 26b5872bf92ba49a9308e64f64ccb10c2bade721
@facebook-github-bot added the CLA Signed and fb-exported labels on Dec 10, 2019
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D18834414

@facebook-github-bot
Contributor

This pull request has been merged in d40272d.
