TechTest - A simple books browsing/querying app with 1 Million records

Screenshots

[Screenshot]

The application is also LIVE here.

Technologies Used

  • React 15.4
  • Redux
  • BabelJS
  • Webpack
  • Express
  • Node Fibers
  • TingoDB (embedded NoSQL datastore)
  • Mocha/Chai/Sinon (the usual suspects) for a glimpse of testing

Features

  • It does what it says on the tin: it lets you browse (by category and author gender) through an embedded database with 1 Million books
  • Includes a generator script that lets you create a new database with randomly generated content (author names, book titles, ratings...)
  • Funny autogenerated 'possessive' book titles!
  • High performing books (rating = 5) are highlighted with a "heart" tag at the top right of the book card
  • Books published on "odd" days of the week are highlighted with a special ribbon at the bottom left of the book card
  • Express server with an open API (just open the browser and hit http://localhost:8080/api/category/all, for example); see the snippet right after this list
  • Basic responsive interface (I didn't spend too much time on this)
  • Use of the cool Semantic-ui
  • definitely NOT production ready :p
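
As a quick illustration of the open API bullet above, the endpoint can be queried from any HTTP client. Below is a minimal sketch using Node's built-in fetch (Node 18+); the shape of the JSON response is an assumption here, not a documented contract.

    // Hypothetical client for the open API. Only /api/category/all is
    // mentioned in this README; the response shape is an assumption.
    const BASE = 'http://localhost:8080';

    async function listCategories() {
      const res = await fetch(`${BASE}/api/category/all`);
      if (!res.ok) throw new Error(`Request failed: ${res.status}`);
      return res.json(); // presumably an array of category objects
    }

    listCategories()
      .then((categories) => console.log(categories))
      .catch((err) => console.error(err));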

Things still in the works and other stuff in the queue

  • Book detail page
  • Adding/displaying book reviews
  • Wish list
  • Purchase flow and shopping cart

How to run this project

I'm assuming that node and npm are already installed on the development machine. After cloning the project, just run npm install:

npm install

You're almost ready to go! You can start the server by running the following command:

npm start

This will run the unit tests and start the server. At startup, the server will access the embedded NoSQL datastore and index all the books. This might take some time with a 1 Million books DB (around 3 minutes). For this reason the project comes with a pre-generated database of only 10,000 records, which keeps the indexing time down to a few seconds.
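
To give an idea of what happens at startup, here is a minimal sketch of opening the embedded datastore and building in-memory indexes, assuming TingoDB's MongoDB-compatible API; the database path and field names are illustrative assumptions, not the project's actual code.

    // Sketch only: open the file-based TingoDB datastore and index the
    // fields the app is expected to query on (category, author gender).
    const Engine = require('tingodb')();
    const db = new Engine.Db('./data/books-db', {});
    const books = db.collection('books');

    // Building these in-memory indexes over 1 Million records is what
    // makes startup take a couple of minutes.
    books.ensureIndex({ category: 1 }, (err) => {
      if (err) return console.error('indexing failed', err);
      books.ensureIndex({ authorGender: 1 }, () => {
        console.log('books collection indexed, server ready');
      });
    });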

Wait! Where are my 1 Million books then?

No worries. If you want to enjoy browsing through 1 Million books you have two choices:

  1. head here and enjoy browsing through 1M fake books
  2. you can just generate a new DB! The project comes with a generator script.

Just run it:

npm run generate <number of books to generate>

If the number of books to generate is not provided, the script will generate 1,000,000 books. In that case... grab a cup of coffee: the generation should take about 3-5 minutes (on a fairly recent machine).
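
The real generator is the npm script above; purely to illustrate the idea, a stripped-down version could look like the sketch below (the collection name, fields and random data are all made up for this example, and the real script defaults to 1,000,000 records).

    // Illustrative only: insert randomly generated books into TingoDB.
    const Engine = require('tingodb')();
    const db = new Engine.Db('./data/books-db', {});
    const books = db.collection('books');

    const FIRST = ['Ada', 'Bruno', 'Carla', 'Dario'];
    const LAST = ['Smith', 'Rossi', 'Novak', 'Lee'];
    const NOUNS = ['Garden', 'Silence', 'Algorithm', 'Voyage'];
    const pick = (arr) => arr[Math.floor(Math.random() * arr.length)];

    function randomBook() {
      const first = pick(FIRST);
      return {
        title: `${first}'s ${pick(NOUNS)}`,            // 'possessive' title
        author: `${first} ${pick(LAST)}`,
        authorGender: Math.random() < 0.5 ? 'male' : 'female',
        category: pick(['fiction', 'horror', 'science', 'poetry']),
        rating: Math.floor(Math.random() * 5) + 1,
        published: new Date(Date.now() - Math.random() * 1e12),
      };
    }

    const count = Number(process.argv[2]) || 10000;    // keep the sketch small
    books.insert(Array.from({ length: count }, randomBook), { w: 1 }, (err) => {
      if (err) return console.error(err);
      console.log(`${count} books generated`);
    });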

Once the generation is done, you can start the server again; the indexing process at startup will now take much more time (the aforementioned ~3 minutes) and then you'll be ready to fly.

[Screenshots]

Why a Backend? Couldn't you make it with just an index.html and JS code?

Well, no. Ever tried to generate a JSON file containing 1 Million books? I did, and I got a 264-Mbyte file. There's no way I can load that into a browser and keep it in memory; it would never model a real-world scenario. Moreover, try filtering a million-book list in memory in the browser... I tried it just for fun and it took 35 seconds to paginate LOL

In general, a paginated list (or a 'load more' one) is a much more realistic scenario and makes for a much more satisfying user experience.

So, a backend is necessary to query, filter and paginate the books. I initially tried with a simple JSON file, loaded in memory in the server process and served to clients. It didn't work out too well, since each pagination request took more than 30 seconds to process. I then tried optimizing: one master JSON file (1M records) PLUS "n" other JSON files, one for each category type, to avoid filtering the main list when requesting a category. That was OK and quite performant, but everything broke when I increased the size of the master JSON file (when adding author/cover images). So I decided to refactor and move everything to a NoSQL database.
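
For context, the kind of paginated, filtered query the backend needs boils down to something like the Express route sketched below, shown here on top of the embedded TingoDB store discussed in the next section. The route path, query parameter names and page size are assumptions for illustration, not the project's actual API.

    // Sketch of a paginated, filtered books endpoint backed by TingoDB.
    const express = require('express');
    const Engine = require('tingodb')();

    const db = new Engine.Db('./data/books-db', {});
    const books = db.collection('books');
    const app = express();
    const PAGE_SIZE = 20;

    app.get('/api/books', (req, res) => {
      const page = Math.max(0, Number(req.query.page) || 0);
      const filter = {};
      if (req.query.category) filter.category = req.query.category;
      if (req.query.gender) filter.authorGender = req.query.gender;

      // skip/limit pagination instead of loading 1M records into the browser
      books.find(filter)
        .skip(page * PAGE_SIZE)
        .limit(PAGE_SIZE)
        .toArray((err, docs) => {
          if (err) return res.status(500).json({ error: err.message });
          res.json(docs);
        });
    });

    app.listen(8080, () => console.log('API listening on :8080'));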

OK, which DB then?

I didn't want to overcomplicate things by adding the constraint of installing an external NoSQL database, so I opted for an embedded one. While googling around I found tingodb and I have to say I'm pretty impressed! It works, and pretty well: it sacrifices a bit of space when storing data in the file-based DB (almost 2x compared to a plain JSON file) but, once the indexes are created in memory, it's pretty performant!
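
For a sense of how little ceremony the embedded store needs, this is roughly all it takes to open the file-based database and run a MongoDB-style query (again, the path and field names are assumptions):

    // Minimal TingoDB usage: the 'database' is just a directory on disk,
    // so there is nothing external to install or start.
    const Engine = require('tingodb')();
    const db = new Engine.Db('./data/books-db', {});

    db.collection('books').findOne({ rating: 5 }, (err, book) => {
      if (err) return console.error(err);
      console.log('a top-rated book:', book && book.title);
    });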

The only reason I didn't commit the 1M-record DB into the project is plain and simple: its size is about 460 Mbytes ;) So, the project comes with a pre-generated 10,000-record DB. If you want to test with 1M, just generate one:

npm run generate

And then start the server again and have fun!
