Load into Memory #1
Comments
@myoldusername I have just updated the library with several improvements for the child process run. I have also added a
Outstanding... I will test it today even though I am out of town, and I will give you feedback. You are awesome!
It is working as expected. Can I send you a donation?
Sometimes it crashes when I pass the text if the string is Unicode, like Chinese. I advise you to add a normalization method to remove all non-characters, e.g. all special characters and smiley characters...
@myoldusername Yes, this is a good point. There are helper functions in utils, and the dataset is normalized there, but of course for symbolic languages it's different, since they must be handled with Unicode, i.e. Unicode conversion and normalization before prediction. In my backend I do Unicode normalization in Java, but here I would prefer a
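A minimal sketch of what such a Node-side normalization helper could look like — this is my own assumption of the approach, not a function from fasttext.js: NFKC-normalize the string, strip control characters and emoji-range symbols, and collapse whitespace before prediction.

```javascript
// Clean a string before passing it to the classifier (sketch, not library code).
function cleanText(text) {
  return text
    .normalize('NFKC')                                        // canonical Unicode normalization
    .replace(/[\u0000-\u001F\u007F]/g, ' ')                   // strip control characters
    .replace(/[\u{1F300}-\u{1FAFF}\u{2600}-\u{27BF}]/gu, '')  // strip emoji / misc symbols
    .replace(/\s+/g, ' ')                                     // collapse runs of whitespace
    .trim();
}

console.log(cleanText('Hello\u0000 world 😀')); // -> "Hello world"
```

Note that NFKC keeps legitimate CJK and Hangul text intact, so a Chinese or Korean paragraph survives the cleaning while hidden control characters do not.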
Well, I am working with the language classification training set provided by fastText. When I pass some language paragraphs to the localhost URL it works, but sometimes it suddenly crashes even with normalized strings. I am not sure; I will run further tests to see if my copy-pasted string has some hidden characters, since Unicode has some nasty stuff, lol. Regarding the Node solution, I think it would be an awesome idea to apply it. With respect, yours.
Yes, this can be a very tricky task when dealing with languages that need Unicode. By the way, I'm using the same model too, so I have added the compressed version of the model in the example, plus some env vars, so that you can go:
and then that will be correctly detected as KO:

```json
{
  "response_time": 0.001,
  "predict": [
    { "label": "KO", "score": "1" },
    { "label": "TR", "score": "1.95313E-08" }
  ]
}
```

NOTE
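One detail worth noting for the Korean example above: non-ASCII text must be percent-encoded before it goes into the query string, or the server will receive mangled bytes. A small sketch, assuming the example server runs on localhost:3030 as elsewhere in this thread:

```javascript
// Build the request URL for the local prediction server; encodeURIComponent
// percent-encodes the UTF-8 bytes of non-ASCII text so the query survives.
function buildPredictUrl(text) {
  return 'http://localhost:3030/?text=' + encodeURIComponent(text);
}

console.log(buildPredictUrl('안녕하세요')); // Korean "hello", percent-encoded
console.log(buildPredictUrl('bader'));     // plain ASCII passes through unchanged
```

The same rule applies when the request is issued from curl or PHP: encode first, then call.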
Well, I'd like to bring to your attention that sometimes, when I pass a regular string, for unknown reasons the Node server process freezes and I have to kill it and restart it again.
@myoldusername please paste here that text and the URL exactly as copied from the browser.
Hmm, I guess you have some issues in your environment:
You therefore call http://localhost:3030/?text=bader and you get:

```json
{
  "response_time": 0.002,
  "predict": [
    { "label": "EN", "score": "0.125931" },
    { "label": "CA", "score": "0.0847617" }
  ]
}
```

This should work without any issues:
and now we do some benchmarking as well, calling it 1, 10 and 100 times iteratively:
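The iterative benchmark described above can be sketched like this; the `predictOnce` stub is a placeholder of mine, standing in for one HTTP prediction call against the server:

```javascript
// Time n sequential calls of fn and return the average latency in milliseconds.
function benchmark(fn, n) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < n; i++) fn();
  return Number(process.hrtime.bigint() - start) / 1e6 / n;
}

// Placeholder workload standing in for a single prediction request.
function predictOnce() {
  return JSON.parse('{"predict":[{"label":"EN","score":"0.125931"}]}');
}

for (const n of [1, 10, 100]) {
  console.log(`${n} calls: ${benchmark(predictOnce, n).toFixed(3)} ms avg`);
}
```

Running 1, 10 and 100 iterations like this makes it easy to see whether per-call latency stays flat once the model is resident in memory.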
I have added some benchmarks here, therefore I'm closing this issue. Feel free to reopen it if you have any problem.
Dear @loretoparisi
I installed your fasttext.js in order to solve the memory problem that we discussed in facebookresearch/fastText#276 (comment).
Now when I run:

```
node fasttext_predict.js
```

it takes about 5 seconds to load the module, and it returns the prediction to stdout and exits, due to

```
fastText.unload();
```
Now I need to call this file, i.e.

```
node fasttext_predict.js UserName
```

from any place, passing some args [UserName] to it, and have it return the result directly on stdout, since you said the model will be loaded into memory, so that I can get this result from the PHP web server. It is the same problem as with the C++ binary: I need it to run in the background!