The Application is also LIVE here
- React 15.4
- Redux
- BabelJS
- Webpack
- Express
- Node Fibers
- TingoDB (embedded NoSQL datastore)
- Mocha/Chai/Sinon (the usual suspects) for a glimpse of testing
- It does what it says on the tin: it lets you browse (by category and author gender) an embedded database of 1 million books
- Includes a generator script that lets you create a new database with randomly generated content (author names, book titles, ratings...)
- Funny autogenerated 'possessive' book titles!
- High-performing books (rating = 5) are highlighted with a "heart" tag at the top right of the book card
- Books published on "odd" days of the week are highlighted with a special ribbon at the bottom left of the book card
- Express server with an open API (open the browser and just hit http://localhost:8080/api/category/all as an example; a minimal route sketch follows the lists below)
- Basic responsive interface (I didn't spend too much time on this)
- Use of the cool Semantic-ui
- definitely NOT production ready :p
Not implemented (yet):
- Book detail page
- Adding/displaying book reviews
- Wish list
- Purchase flow and shopping cart
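For the curious, here's a minimal, hypothetical sketch of what one of those open API routes could look like in Express. Only the route path comes from the example above; everything else (names, data shape) is an assumption, not this project's actual code:

```javascript
// Hypothetical sketch of the open API route mentioned above.
const express = require('express');
const app = express();

// Imagine this list is loaded from the embedded datastore at startup.
const categories = [{ id: 1, name: 'Sci-Fi' }, { id: 2, name: 'Romance' }];

// GET /api/category/all -> all book categories as JSON
app.get('/api/category/all', (req, res) => {
  res.json(categories);
});

app.listen(8080, () => console.log('Listening on http://localhost:8080'));
```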
I'm assuming that node and npm are already installed on the development machine. After cloning the project, just run npm install:
```
npm install
```
You're almost ready to go! You can start the server by just running the following command:
```
npm start
```
This will run the unit tests and start the server. At startup, the server accesses the embedded NoSQL datastore and indexes all the books. This can take some time with a 1-million-book DB (around 3 minutes). For this reason the project comes with a pre-generated database of only 10,000 records, which keeps the indexing time down to a few seconds.
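To give an idea of what that startup step involves, here is a rough sketch of indexing a TingoDB collection. The DB path and the indexed field names are my assumptions, not necessarily this project's actual code; TingoDB exposes a MongoDB-like API:

```javascript
// Rough sketch of the startup indexing step (paths/fields are assumptions).
const path = require('path');
const Db = require('tingodb')().Db;

const db = new Db(path.join(__dirname, 'db'), {});
const books = db.collection('books');

console.time('indexing');
// Build in-memory indexes on the fields used for filtering. With a
// 1-million-book DB this is the ~3 minute step mentioned above.
books.ensureIndex({ category: 1 }, (err) => {
  if (err) throw err;
  books.ensureIndex({ authorGender: 1 }, (err2) => {
    if (err2) throw err2;
    console.timeEnd('indexing');
    // ...now start accepting HTTP requests...
  });
});
```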
No worries. If you want to enjoy browsing through 1 million books you have two choices:
- head here and enjoy browsing through 1M fake books
- you can just generate a new DB! The project comes with a generator script.
Just run it:
```
npm run generate <number of books to generate>
```
If the number of books to generate is not provided, the script will generate 1,000,000 books. In that case... grab a cup of coffee: the generation should take about 3-5 minutes (on a fairly recent machine).
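If you're wondering what the generator roughly does, a stripped-down, hypothetical version might look like this. The field names, categories and single batch insert are illustrative assumptions:

```javascript
// Stripped-down sketch of the generator script: insert N random books
// into the embedded TingoDB datastore.
const path = require('path');
const Db = require('tingodb')().Db;

const db = new Db(path.join(__dirname, 'db'), {});
const books = db.collection('books');

const CATEGORIES = ['Sci-Fi', 'Romance', 'Horror', 'Crime'];
const pick = (list) => list[Math.floor(Math.random() * list.length)];

// Default to 1,000,000 books when no count is passed on the command line.
const count = Number(process.argv[2]) || 1000000;

const docs = [];
for (let i = 0; i < count; i++) {
  docs.push({
    title: `Book #${i}`,                       // the real script autogenerates funnier titles
    category: pick(CATEGORIES),
    authorGender: pick(['M', 'F']),
    rating: Math.floor(Math.random() * 5) + 1, // rating = 5 gets the "heart" tag
    publishedAt: new Date(Date.now() - Math.floor(Math.random() * 1e12)),
  });
}

books.insert(docs, (err) => {
  if (err) throw err;
  console.log(`Generated ${count} books`);
});
```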
After that, you can start the server again; the indexing process at startup will now take much longer (the aforementioned ~3 minutes) and then you'll be ready to fly.
Well, no. Ever tried to generate a JSON file containing 1 million books? I did, and I got a 264 MB file. There's no way I can load that into a browser and keep it in memory, and it would never model a real-world scenario. Moreover, try filtering a million-book list in memory in the browser... I tried just for fun and it took 35 seconds to paginate LOL
In general, a paginated list (or a 'load more' one) is a much more realistic scenario and makes for a much more satisfying user experience.
So, a backend is necessary to query, filter and paginate the books. I initially tried a simple JSON file, loaded in memory in the server process and served to clients. It didn't work out too well, since each pagination request took more than 30 seconds to process. I then tried optimizing, creating one master JSON file (1M records) PLUS "n" other JSON files, one for each category type, to avoid filtering the main list on every category request. That was OK and quite performant, but everything broke when I increased the size of the master JSON file (when adding author/cover images). So I decided to refactor and move everything to a NoSQL database.
I didn't want to overcomplicate things by requiring an external NoSQL database to be installed, so I opted for an embedded one. While googling around I found TingoDB, and I have to say I'm pretty impressed! It works, and pretty well: it sacrifices a bit of space when storing data in the file-based DB (almost 2x compared to a plain JSON file) but, once indexes are created in memory, it's pretty performant!
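To tie it together, here's a hedged sketch of the kind of paginated, index-backed query that makes this workable. The route, query parameter names and page size are assumptions, not the project's actual API:

```javascript
// Hypothetical paginated endpoint: filter and page in the datastore
// instead of in the browser.
const express = require('express');
const path = require('path');
const Db = require('tingodb')().Db;

const app = express();
const books = new Db(path.join(__dirname, 'db'), {}).collection('books');

app.get('/api/books', (req, res) => {
  const page = Number(req.query.page) || 0;
  const pageSize = 10;

  const filter = {};
  if (req.query.category) filter.category = req.query.category;
  if (req.query.gender) filter.authorGender = req.query.gender;

  books
    .find(filter)
    .skip(page * pageSize)   // the in-memory indexes keep this fast
    .limit(pageSize)
    .toArray((err, docs) => {
      if (err) return res.status(500).json({ error: err.message });
      res.json(docs);
    });
});

app.listen(8080);
```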
The only reason I didn't commit the 1M-record DB into the project is plain and simple: its size is about 460 MB ;) So, the project comes with a pre-generated 10,000-record DB. If you want to test with 1M records, just generate one:
```
npm run generate
```
And then start the server again and have fun!