Dead simple search engine for a huge text file (> 1 million lines), powered by Elasticsearch.
Typically used to search files by name when you have millions of files in a file system.
git clone https://github.com/soruly/search.git
Copy .env.example to .env, and update PORT if needed
docker-compose up -d
For example, generate a file name list:
find . > list.txt
curl -X POST -H "Content-Type: text/plain" --data-binary @list.txt http://127.0.0.1:8001/update
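Note that find . lists directories as well as regular files, so directory entries end up in the index too. If you only want regular files, the listing can be narrowed before posting. A minimal sketch (the -type f filter is an assumption about what you want indexed):

```shell
# Sketch: list regular files only, so directory entries are not indexed as names.
# Adjust the filter to taste (e.g. add -name patterns to limit extensions).
find . -type f > list.txt
```

The resulting list.txt is posted to the /update endpoint exactly as above.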
Updating the index usually takes 15-30 seconds for 1 million records.
Note: the existing index is wiped on every update
Now open http://127.0.0.1:PORT in your browser (PORT being the value set in .env, e.g. 8001 as in the example above)
You can schedule a cron job to update the index periodically:
0 * * * * find . > list.txt && curl -X POST -H "Content-Type: text/plain" --data-binary @list.txt http://127.0.0.1:PORT/update
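Cron runs jobs with a minimal environment and an unpredictable working directory, so absolute paths are safer there; a lock also prevents overlapping runs if an update takes longer than expected. A sketch of such a crontab entry (the /srv/files data root, the /tmp paths, and port 8001 are placeholder assumptions; adjust to your setup):

```shell
# Crontab sketch: hourly reindex using absolute paths, with flock guarding
# against overlapping runs. /srv/files, /tmp paths, and port 8001 are placeholders.
0 * * * * flock -n /tmp/search-update.lock sh -c 'find /srv/files > /tmp/list.txt && curl -fsS -X POST -H "Content-Type: text/plain" --data-binary @/tmp/list.txt http://127.0.0.1:8001/update'
```

The -fsS flags make curl fail loudly on HTTP errors while staying quiet on success, which keeps cron mail useful.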