ImageSub is a web app project that uses a Recurrent Neural Network. It is based on NeuralTalk2 by [Andrej Karpathy](https://github.com/karpathy). I modified the project so that it interacts with the user through a web application: users can upload pictures or photographs and get a generated caption for them. If you prefer a Docker-based RESTful version, you can use neuraltalk2-web: https://github.com/jacopofar/neuraltalk2-web
This is an early code release that works well but was released somewhat hastily, so it may require some reading of the inline comments (which I tried to keep thorough in general). I will improve it over time, but I wanted to push the code out because I had promised it to many people.
- See the original instructions here: https://github.com/karpathy/neuraltalk2/blob/master/README.md
- Instructions in Bahasa Indonesia : https://hynra.com/post/aplikasi-web-image-caption-menggunakan-neural-network-part-2/
Now clone or download this project to your machine, open vis/app.py, and edit this line:
```python
subprocess.call('th eval.lua -model ../Public/model_id1-501-1448236541_cpu.t7 -image_folder '+folder_path+' -num_images 1 -result_folder vis/'+dir, shell=True, cwd="../")
```
`-model` is the path to your model checkpoint.
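As a sketch, the command passed to `subprocess.call` above could be assembled with a small helper, which makes the individual flags easier to edit. The function name and the example paths below are assumptions for illustration; substitute your own model path and folders.

```python
def build_caption_command(model_path, folder_path, result_dir, num_images=1):
    """Assemble the Torch eval.lua command that generates captions
    for the uploaded images, mirroring the call in vis/app.py."""
    return (
        "th eval.lua"
        f" -model {model_path}"
        f" -image_folder {folder_path}"
        f" -num_images {num_images}"
        f" -result_folder vis/{result_dir}"
    )

# Example (hypothetical paths):
cmd = build_caption_command(
    "../Public/model_id1-501-1448236541_cpu.t7", "uploads/session1", "results"
)
```

This would then be run with `subprocess.call(cmd, shell=True, cwd="../")` as in the original line.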
Run the Python server:

```
$ cd vis
$ python app.py
```
Then open localhost:8000 in your browser.
"I only have CPU". Okay, in that case download the cpu model checkpoint. Make sure you add
-gpuid -1 in
vis/app.py to tell the script to run on CPU.
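For example, the CPU flag could be appended to the eval command before it is passed to `subprocess.call`. The helper below is a hypothetical sketch, not code from this repository:

```python
def with_cpu_flag(command):
    """Append -gpuid -1 so eval.lua runs on CPU instead of GPU."""
    return command + " -gpuid -1"

# Example with an abbreviated command string:
cpu_cmd = with_cpu_flag("th eval.lua -model ../Public/model_id1-501-1448236541_cpu.t7")
```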