Setup:
- Install: pip install -U bert-serving-server bert-serving-client
- The server MUST be running on Python >= 3.5 with TensorFlow >= 1.10
- Download a pretrained BERT model:
  SMALL (BERT-Base, cased) = https://storage.googleapis.com/bert_models/2018_10_18/cased_L-12_H-768_A-12.zip
  LARGE (BERT-Large, cased) = https://storage.googleapis.com/bert_models/2018_10_18/cased_L-24_H-1024_A-16.zip
- Unzip the model to a directory you can point the server at later (a folder inside your venv is suggested)
Install other needed libraries and download data:
- pip install keras_metrics keras pandas numpy scikit-learn
Serve Model:
- bert-serving-start -model_dir ~/bert/models/cased_L-12_H-768_A-12/ -num_worker=2 -max_seq_len=250
- -num_worker sets how many workers serve requests in parallel (the model is loaded n times)
- -max_seq_len sets the longest input sequence the BERT model will accept (250 is suggested for concept work)
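With the server running, the bert-serving client encodes sentences into fixed-length vectors (768 dimensions for BERT-Base) that feed a downstream classifier. A minimal sketch of that downstream step, using random vectors as stand-ins for real encodings and a scikit-learn classifier in place of the repo's Keras pipeline so it runs without a live server:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# With a live server you would obtain real sentence vectors via:
#   from bert_serving.client import BertClient
#   X = BertClient().encode(sentences)   # shape (n, 768) for BERT-Base
# Random vectors stand in here so the sketch is self-contained.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 768))
# Toy binary labels, deterministically tied to one embedding dimension.
y = (X[:, 0] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 2))
```

The same pattern applies once real encodings replace the random matrix: the 768-dimensional vectors are ordinary feature rows for whatever classifier sits on top.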
jes-moore/bertVersusAll
About
A project to compare BERT versus an encode-embed-attend-predict architecture for NLP classification