diff --git a/chapter10/chapter10.md b/chapter10/chapter10.md index e2698f4c..6e2ff0fe 100755 --- a/chapter10/chapter10.md +++ b/chapter10/chapter10.md @@ -325,7 +325,7 @@ When things go south (e.g., memory leaks, overloads, crashes), there are two thi Monitoring ---------- -When going to production, software and development operations engineers need a way to get current status quickly. Having a dashboard or just an end point that spits out JSON-formatted properties is a good idea, including properties such as the following: +When going to production, software and development operations engineers need a way to get current status quickly. Having a dashboard or just an endpoint that spits out JSON-formatted properties is a good idea, including properties such as the following: - `memoryUsage`: memory usage information - `uptime`: number of seconds the Node.js process is running diff --git a/chapter2/chapter2.md b/chapter2/chapter2.md index 6b2eccd9..76de7802 100755 --- a/chapter2/chapter2.md +++ b/chapter2/chapter2.md @@ -906,3 +906,5 @@ Nothing fancy so far, but it's worth pointing out that it took us just a few # Summary In this chapter we learned what Express.js is and how it works. We also explored different ways to install it and use its scaffolding (command-line tool) to generate apps. We went through the Blog example with a high-level overview (traditional vs. REST API approaches), and proceeded with creating the project file, folders, and the simple Hello World example, which serves as a foundation for the book's main project: the Blog app. And then lastly, we touched on a few topics such as settings, a typical request process, routes, AJAX versus server side, Pug, templates, and middleware. + +In the next chapter we'll examine an important aspect of modern web development and software engineering: test-driven development. We look at the Mocha module and write some tests for Blog in true TDD/BDD style. In addition, the next chapter deals with adding a database to Blog routes to populate these templates, and shows you how to turn them into working HTML pages! diff --git a/chapter4/chapter4.md b/chapter4/chapter4.md index 079de03d..5f61fa40 100755 --- a/chapter4/chapter4.md +++ b/chapter4/chapter4.md @@ -1159,11 +1159,11 @@ app.listen(3000, () => { As usual, the source code is in the GitHub repository, and the snippet is in the `code/ch4/consolidate` folder. -For more information on how to configure Express.js settings and use Consolidate.js, refer to the still-up-to-date book on Express.js version 4—Pro Express.js book (Apress, 2014). +For more information on how to configure Express.js settings and use Consolidate.js, refer to the still-up-to-date book on Express.js version 4—*Pro Express.js* (Apress, 2014), which is available on all major book stores, and of course at . ## Pug and Express.js -Pug is compatible with Express.js out of the box (in fact, it's the default choice), so to use Pug with Express.js, you just need to install a template engine module (`pug`) () and provide an extension to Express.js via the `view engine` setting.). +Pug is compatible with Express.js out of the box (in fact, it's the default choice), so to use Pug with Express.js, you just need to install a template engine module (`pug`) () and provide an extension to Express.js via the `view engine` setting. 
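Before walking through the individual settings, here is a minimal, self-contained sketch of an Express server wired up to Pug. It is an illustration only, not code from Blog: the file names (`server.js`, `views/index.pug`), the route, and the `appTitle` local are my own assumptions.

```js
// server.js — a minimal sketch, assuming express and pug are installed (npm i express pug -SE)
const express = require('express')
const app = express()

// Tell Express to use the pug package for *.pug files in the default views/ folder
app.set('view engine', 'pug')

app.get('/', (req, res) => {
  // Renders views/index.pug and passes one local variable to the template
  res.render('index', { appTitle: 'Hello Pug' })
})

app.listen(3000)
```

The next paragraphs break these pieces down one by one.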
For example, in the main Express server file we set the `view engine` setting as `pug` to let Express know which library to use for templates: @@ -1173,7 +1173,7 @@ app.set('view engine', 'pug') Of course, developers need to install the `pug` npm module into their project so the pug package is stored locally in `node_modules`. Express will use the name `pug` provided to `view engine` to import the `pug` package and *also* use the `pug` as a template files extension in the `views` folder (`views` is the default name). -**Note** If you use `$ express ` command-line tool, you can add the option for engine support, i.e., `–e` option for EJS and –H for Hogan. This will add EJS or Hogan automatically to your new project. Without either of these options, the `express-generator` (versions 4.0.0-4.2.0) will use Pug. +**Note** If you use the `$ express ` command-line tool, you can add the option for engine support, i.e., the `–e` option for EJS and –H for Hogan. This will add EJS or Hogan automatically to your new project. Without either of these options, the `express-generator` (versions 4.0.0-4.2.0) will use Pug. In the route file, we can call the template—for example, `views/page.pug` (the `views` folder name is another Express.js default, which can be overwritten with the `view` setting): @@ -1196,14 +1196,16 @@ Next, let's cover the Express usage for Handlebars. Contrary to Pug, the Handlebars library from doesn't come with the `__express` method, but there are a few options to make Handlebars work with Express.js:). -- [`consolidate`](https://www.npmjs.com/package/consolidate) (): a Swiss-army knife of Express.js template engine libraries (shown above) -- [`hbs`](https://www.npmjs.com/package/hbs) (): wrapper library for Handlebars -- [`express-handlebarss`](https://www.npmjs.com/package/express-handlebars) () a module to use Handlebars with Express +- [`consolidate`](https://www.npmjs.com/package/consolidate) (): A Swiss-army knife of Express.js template engine libraries (shown in one of the previous sections) +- [`hbs`](https://www.npmjs.com/package/hbs) (): Wrapper library for Handlebars +- [`express-handlebarss`](https://www.npmjs.com/package/express-handlebars) (): A module to use Handlebars with Express -Here's how we can use `hbs` approach (extension `hbs`). Inside of the typical Express.js app code (i.e., configuration section of the main file that we launch with the `$ node` command) write the following statements: +Here's how we can use the `hbs` approach (extension `hbs`). Somewhere in the configuration section of the main Express file (file that we launch with the `$ node` command), write the following statements: ```js +// Imports app.set('view engine', 'hbs') +// Middleware ``` Or, if another extension is preferable, such as `html`, we see the following: @@ -1225,16 +1227,16 @@ Good. Now we can put our knowledge to practice. # Project: Adding Pug Templates to Blog -Last, we can continue with Blog. In this section we add main pages using Pug, plus add a layout and some partials: +Lastly, we can continue with Blog. 
In this section we add main pages using Pug, plus we add a layout and some partials: -- `layout.pug`: global app-wide template -- `index.pug`: home page with the list of posts -- `article.pug`: individual article page -- `login.pug`: page with a login form -- `post.pug`: page for adding a new article -- `admin.pug`: page to administer articles after logging in +- `layout.pug`: Global app-wide template +- `index.pug`: Home page with the list of posts +- `article.pug`: Individual article page +- `login.pug`: Page with a login form +- `post.pug`: Page for adding a new article +- `admin.pug`: Page to administer articles after logging in -Because the templates in this mini-project require data, we'll skip the demo until the chapter 5 where we'll plug in the MongoDB database. So the source code for the Pug templates is exactly the same as in the ch5 folder of the GitHub repository practicalnode: . Feel free to copy it from there or follow the instructions below. +Because the templates in this mini-project require data, we'll skip the demo until Chapter 5, where we'll plug in the MongoDB database. So the source code for the Pug templates is exactly the same as in the `code/ch5` folder of the GitHub repository `azat-co/practicalnode`: . Feel free to copy it from there or follow the instructions to implement listed below in this section. ## layout.pug @@ -1251,7 +1253,7 @@ html head ``` -The title of the each page is provided from the `appTitle` variable (aka, local): +The title of the each page is provided from the `appTitle` variable (a.k.a., local): ```pug title= appTitle @@ -1269,13 +1271,13 @@ Then, in the `head` tag, we list all the front-end assets that we need app-wide meta(name="viewport", content="width=device-width, initial-scale=1.0") ``` -The main content lives in `body` which has the same level indentation as `head`: +The main content lives in `body`, which has the same level indentation as `head`: ```pug body ``` -Inside the body, we write an ID and some classes for the styles that we'll add later: +Inside the body, we write an id and some classes for the styles that we'll add later: ```pug #wrap @@ -1305,21 +1307,21 @@ Menu is a partial (i.e., an include) that is stored in the `views/includes` fold include includes/menu ``` -In this block named `alert`, we can display messages for users so let's use special alerty classes on a `div`: +In this block named `alert`, we can display messages for users, so let's use special alerty classes on a `div` (the indentation is preserved to show hierarchy): ```pug block alert div.alert.alert-warning.hidden ``` -Main content goes in this block. It is empty now because other template will define it. +Main content goes in this block. It is empty now because other template will define it: ```pug .content block content ``` -Lastly, the footer block with `contaner` class `div` and `p` with text and a link (link is wrappen in text) looks as follows: +Lastly, the footer block with `div` with the `container` class and with `p` with text and a link (link is wrapped in text) looks as follows: ```pug block footer @@ -1331,7 +1333,7 @@ Lastly, the footer block with `contaner` class `div` and `p` with text and a lin | . 
``` -To give you a full picture as well as preserve proper indentation (which is PARAMOUNT in Pug), the full code of `layout.pug` is as follows: +To give you a full picture as well as preserve proper indentation (which is *PARAMOUNT* in Pug), the full code of `layout.pug` is as follows: ```pug doctype html @@ -1386,7 +1388,7 @@ block page - var menu = 'index' ``` -Of course, we need to overwrite `content` block. Ergo, the main content with the list of articles that comes from `locals` iterates over the blog posts (articles). Each article link has a title and needless to say a URL which is formed by the `article.slug` value. When there's no posts/articles, then we show a message that nothing has been published yet. The code is as follows: +Of course, we need to overwrite the `content` block. Ergo, the main content with the list of articles that comes from `locals` iterates over the blog posts (articles). Each article link has a title and, needless to say, a URL that is formed by the `article.slug` value. When there are no posts/articles, then we show a message that nothing has been published yet. The code is as follows: ```pug block content @@ -1401,7 +1403,7 @@ block content a(href="/articles/#{article.slug}")= article.title ``` -For your reference and the ease of comprehension Pug's style, the full code of `index.pug` is as follows. You can see `extends` and two block overwrites (of `layout`): +For your reference and to show the ease of comprehension in Pug's style, the full code of `index.pug` is as follows. You can see `extends` and two block overwrites (of `layout`): ```pug extends layout @@ -1424,13 +1426,13 @@ Figure 4-4 shows how the home page looks after adding style sheets. ![alt](media/image4.png) -Phew. Next is the actual blog posts (a.k.a. article). +***Figure 4-4.** The home page of Blog shows menu and the titles of the published articles* -***Figure 4-4.** The home page* +Phew. Next is the page for the actual blog posts/articles. ## article.pug -The individual article page (Figure 4-5) is relatively unsophisticated because most of the elements are abstracted into `layout.pug`. We only have `extends` and then overwrite the `content` block without article title (h1 heading) and article's text (p for paragraph). +The individual article page (Figure 4-5) is relatively unsophisticated because most of the elements are abstracted into `layout.pug`. We only have `extends` and then overwrite the `content` block without the article title (`h1` heading) and article's text (`p` for paragraph). ```pug extends layout @@ -1441,17 +1443,19 @@ block content p= text ``` -This is the awesomeness which we get thanks to Twitter Bootstrap and h1 and p elements. You can clearly see that even despite defining only h1 and p, the webpage `/articles/node-fundamentals` has a page title menu and the footer. That's due to the inheritance, extends and `layout.pug`. +This is the awesomeness which we receive for free thanks to Twitter Bootstrap and `h1` and `p` elements. You can clearly see that even despite defining only `h1` and `p`, the webpage `/articles/node-fundamentals` has a page title menu and the footer. That's due to the inheritance, extends, and `layout.pug`. ![alt](media/image5.png) ***Figure 4-5.** The article page* -Did you notice that log in link? Let's implement the log in page next. +Did you notice that "Log in" link? Let's implement the login page next. ## login.pug -Similarly to `article.pug`, the login page uses `login.pug` which contains... not much! 
*Only* a form and a button with some minimal Twitter Bootstrap classes/markup. So likewise to `article.pug`, we extend layout and overwrite two blocks. One for the actice menu value and the other for the content, that is the main part of the page. This main part has guess what? A LOGIN FORM! +Similarly to `article.pug`, the login page uses `login.pug`, which contains... not much! *Only* a form and a button with some minimal Twitter Bootstrap classes/markup. + +So as with `article.pug`, we extend layout and overwrite two blocks—one for the active menu value and the other for the content, which is the main part of the page. This main part has guess what? A LOGIN FORM! This is file `login.pug`: ```pug extends layout @@ -1473,17 +1477,17 @@ block content button.btn.btn-lg.btn-primary.btn-block(type="submit") Log in ``` -Again, thanks to Twitter Bootstrap, our page looks stellar. It has menu because of the `extends` and `layout.pug`. Figure 4-6 shows how the login page looks. +Again, thanks to Twitter Bootstrap, our page looks stellar. It has a menu because of `extends` and `layout.pug`. Figure 4-6 shows how the login page looks. ![alt](media/image6.png) ***Figure 4-6.** The login page* -But how to create a new article? By posting it's title and text. +But how to create a new article? Easy! By posting its title and text. ## post.pug -The post page (Figure 4-7) has another form and it also extends `layout.pug`. This time, the form contains a text area element which will become the main text of the article. In addition to the text, there are title and the URL segment (or path) which is called slug 🐌. +The post page (Figure 4-7) has another form and it also extends `layout.pug`. This time, the form contains a text area element that will become the main text of the article. In addition to the article text, there are `title`, and the URL segment (or path) to the article which is called `slug` 🐌. ```pug extends layout @@ -1509,19 +1513,19 @@ block content button.btn.btn-primary(type="submit") Save ``` -To give you some visual of the Pug of `post.pug`, take a look at the page for posting new articles. The action attribute of `
` will allow browsers send the data to the back-end and then Express will take care of it by processing and our Node code will save it to the database. +To give you some visual of the Pug of `post.pug`, take a look at the page for posting new articles. The action attribute of `` will allow browsers to send the data to the backend and then Express will take care of it by processing, and our Node code will save it to the database. ![alt](media/image7.png) ***Figure 4-7.** The post page* -If a valid administrator user is logged in, then we want to show an admin interface. See the Admin link in the menu? Let's implement the admin page to which this link leads. +If a valid administrator user is logged in, then we want to show an admin interface. See the Admin link in the menu? Let's implement the admin page to which this menu link leads to. ## admin.pug -The admin page (Figure 4-8) has a loop of articles just like the home page but in addition to just showing articles, we can include a front-end script (`js/admin.js`) specific to this page. This script will do some AJAX-y calls to publish and unpublish articles. These functions will be available only to admins. Of course we will need an server-side validation on the backend later. Don't trust only the front-end validation or authorization! +The admin page (Figure 4-8) has a loop of articles just like the home page, but in addition to just showing articles, we can include a front-end script (`js/admin.js`) specific to this page. This script will do some AJAX-y calls to publish and unpublish articles. These functions will be available only to admins. Of course we will need an server-side validation on the backend later. Don't trust only the front-end validation or authorization! -So the `admin.pug` file starts with the layout extension and has content overwrite in which there's a table of articles. In each row of the table, we use `glyphicon` to show a fancy icon pause or play. The icons come from Twitter Bootstrap and enabled via classes. +So the `admin.pug` file starts with the layout extension and has content overwrite, in which there's a table of articles. In each row of the table, we use `glyphicon` to show a fancy icon for pause or play ▶️. The icons come from Twitter Bootstrap and are enabled via classes: ```pug extends layout @@ -1555,26 +1559,27 @@ block content script(type="text/javascript", src="js/admin.js") ``` -Please notice that, we use ES6 string template (or interpolation) to print article IDs as attributes `data-id` (indentation was removed): +Please notice that we use ES6 string template (or interpolation) to print article ids as attributes `data-id` (indentation was removed): ```pug tr(data-id=`${article._id}`, class=(!article.published) ? 'unpublished':'') ``` -And, a conditional (ternary) operator () is used for classes and title attributes. Remember, it's JavaScript! (Indentation was removed for better viewing.) +And a conditional (ternary) operator () is used for classes and title attributes. Remember, it's JavaScript! (Indentation has was removed for better viewing.) ```pug span.glyphicon(class=(article.published) ? "glyphicon-pause" : "glyphicon-play", title=(article.published) ? "Unpublish" : "Publish") ``` -The result is a beautiful admin page (okay, enough with sarcasm and saying Twitter Bootstrap is stellar, pretty or cute. It's not... but compared to standard HTML which puts me to sleep, Twitter Bootstrap style is a HUGE improvement.) It has functionality to publish and unpublish articles. 
+The result is a beautiful admin page. (Okay, enough with sarcasm and saying Twitter Bootstrap is stellar, pretty, or cute. It's not... but compared to standard HTML, which puts me to sleep, Twitter Bootstrap style is a HUGE improvement.) It has functionality to publish and unpublish articles.

![alt](media/image8.png)

-***Figure 4-8.** The admin page*
+***Figure 4-8.** The admin page shows the list of published and draft articles*

# Summary

-You learned about Pug and Handlebars templates (variables, iterations, condition, partials, unescaping, and so forth), and how to use them in a standalone Node.js script or within Express.js. In addition, the main pages for Blog were created using Pug.
+In this chapter, you learned about the Pug and Handlebars templates (variables, iterations, conditions, partials, unescaping, and so forth), and how to use them in a standalone Node.js script or within Express.js. In addition, the main pages for Blog were created using Pug.
+
+In the next chapter, we'll learn how to extract data from a database and save new data to it. You'll become familiar with MongoDB. Onwards.

-In the next chapter we examine an important aspect of modern web development and software engineering: test-driven development. We look at the Mocha module and write some tests for Blog in true TDD/BDD style. In addition, the next chapter deals with adding a database to Blog routes to populate these templates, and shows you how to turn them into working HTML pages!
diff --git a/chapter5/chapter5.md b/chapter5/chapter5.md
index a0c3932a..6694f58c 100755
--- a/chapter5/chapter5.md
+++ b/chapter5/chapter5.md
@@ -2,15 +2,19 @@ Chapter 5
---------
# Persistence with MongoDB and Mongoskin

-NoSQL databases (DBs), also called _non-relational_ _databases_, are more horizontally scalable, and thus better suited for distributed systems. They are tailared to a specific queries. NoSQL databases deal routinely with larger data sizes than traditional ones. NoSQL databases are are usually open source.
+I really like using MongoDB with Node. Many other Node developers would agree with me because this database has a JavaScript interface and uses a JSON-like data structure. MongoDB belongs to the category of NoSQL databases.

-The key distinction in implementation of apps with NoSQL DBs comes from the fact that NoSQL DBs are schema-less. They are simple stores. In other words, relationships between database entities are not stored in the database itself (no more join queries); they are moved to the application or object-relational mapping (ORM) levels—in our case, to Node.js code. Another good reason to use NoSQL databases is that, because they are schema-less, they are perfect for prototyping and Agile iterations (more pushes!).
+NoSQL databases (DBs), also called _non-relational_ _databases_, are more horizontally scalable and better suited for distributed systems than traditional SQL ones (a.k.a. RDBMS). NoSQL DBs are built in a way that allows data duplication, and they can be tailored to specific queries. This process is called denormalization. In short, NoSQL comes to the rescue when an RDBMS can't scale. It's often the case that NoSQL databases routinely deal with larger data sizes than traditional ones.

-MongoDB is a document store NoSQL database (as opposed to key value and wide-column store NoSQL databases, [http://nosql-database.org](http://nosql-database.org)). It's the most mature and dependable NoSQL database available thus far. In addition to efficiency, scalability, and lightning speed, MongoDB uses JavaScript–like language for its interface! This alone is magical, because now there's no need to switch context between the front end (browser JavaScript), back end (Node.js), and database (MongoDB).
+The key distinction in implementation of apps with NoSQL DBs comes from the fact that NoSQL DBs are schema-less. There's no table, just a simple store indexed by IDs. A lot of the data types are not stored in the database itself (no more `ALTER TABLE` queries); they are moved to the application or object-relational mapping (ORM) levels—in our case, to Node.js code. Another good reason to use NoSQL databases is precisely that they are schema-less. For me this is the best advantage of NoSQL: I can quickly prototype and iterate (more git pushes!). Once I am more or less done, or think I am done, I can implement schema and validation in Node. This workflow allows me not to waste time early in the project lifecycle while still having the safety of a schema at a more mature stage.

-The company behind MongoDB is an industry leader and provides education and certification through its online MongoDB University ([https://university.mongodb.com](https://university.mongodb.com)).
+MongoDB is a document store NoSQL database (as opposed to key-value and wide-column store NoSQL databases, [http://nosql-database.org](http://nosql-database.org)). It's the most mature and dependable NoSQL database available thus far. I know that some people just hate MongoDB for its bugs, but when I ask them if there's a better alternative, they can't name anything. Interestingly, some traditional databases have added a NoSQL field type, which allows them to reap the benefits of flexibility that were previously available only to NoSQL databases.

-To get you started with MongoDB and Node.js, we examine the following in this chapter:
+In addition to efficiency, scalability, and lightning speed, MongoDB has a JavaScript interface! This alone is magical, because now there's no need to switch context between the front end (browser JavaScript), back end (Node.js), and database (MongoDB). This is my favorite feature, because in 90% of my projects I don't handle that much data or traffic, but I use the JavaScript interface all the time.
+
+The company behind MongoDB is an industry leader, and provides education and certification through its online MongoDB University ([https://university.mongodb.com](https://university.mongodb.com)). I was once invited by Mongo to interview for a Director of Software Engineering role, but declined to continue after the first few rounds. Well, that's a topic for a different book.
+
+To get you started with MongoDB and Node.js, I'll show the following in this chapter:

- Easy and proper installation of MongoDB
- How to run the Mongo server
@@ -18,33 +22,33 @@ To get you started with MongoDB and Node.js, we examine the following in this ch
- MongoDB shell in detail
- Minimalistic native MongoDB driver for Node.js example
- Main Mongoskin methods
-- Project: storing Blog data in MongoDB with Mongoskin
+- Project: Storing Blog data in MongoDB with Mongoskin

## Easy and Proper Installation of MongoDB

-Next, we look at MongoDB installation from the official package, as well as using HomeBrew for macOS users (recommended).
+Next, I'll show the MongoDB installation from the official package, as well as using HomeBrew for macOS users (recommended).
-The following steps are better suited for macOS/Linux–based systems, but with some modifications they can be used for Windows systems as well (i.e., `$PATH` variable, or the slashes). For non-macOS users, there are [many other ways to install](http://docs.mongodb.org/manual/installation) (). +The following steps are better suited for macOS/Linux–based systems, but with some modifications they can be used for Windows systems as well, i.e., modify the `$PATH` variable, and the slashes. For more instructions for non-macOS/Linux users, go and check [many other ways to install Mongo](http://docs.mongodb.org/manual/installation) (). -The HomeBrew installation is recommended and is the easiest path (assuming macOS users have `brew` installed already, which was covered in Chapter 1): +I'll continue with the installation for macOS users. The HomeBrew installation is recommended and is the easiest path (assuming macOS users have `brew` installed already, which was covered in Chapter 1): ``` $ brew install mongodb ``` -If this doesn't work, try the manual path described later. One of them is to download an archive file for MongoDB at . For the latest Apple laptops, such as MacBook Air, select the OS X 64-bit version. The owners of older Macs should browse the link . The owners of other laptops and OSs, select the appropriate package for the download. +If this doesn't work, try the manual installation. It's basically downloading an archive file for MongoDB at and then configuring it. For the latest Apple laptops, such as MacBook Air, select the OS X 64-bit version. The owners of older Macs should browse the link . The owners of other laptops and OSs, select the appropriate package for the download. **Tip** If you don't know the architecture type of your processor when choosing a MongoDB package, type `$ uname -p` in the command line to find this information. After the download, unpack the package into your web development folder or any other as long as you remember it. For example, my development folder is `~/Documents/Code` (`~` means home). If you want, you could install MongoDB into the `/usr/local/mongodb` folder. -_Optional:_ If you would like to access MongoDB commands from anywhere on your system, you need to add your `mongodb` path to the `$PATH` variable. For macOS, you need the open-system `paths` file which is located at `/etc/paths` with: +_Optional:_ If you would like to access MongoDB commands from anywhere on your system, you need to add your `mongodb` path to the `$PATH` variable. For macOS, you need the open-system `paths` file, which is located at `/etc/paths` with: ``` $ sudo vi /etc/paths ``` -Or, if you prefer VS Code and have the `code` shell command installed: +Or, if you prefer VS Code and have the `code` shell command installed, use this VS Code command: ``` $ code /etc/paths @@ -63,60 +67,61 @@ $ sudo mkdir -p /data/db $ sudo chown `id -u` /data/db ``` -This data folder is where your local database instance will store all databases, documents, etc. - all data. The figure below shows how I create my data folder in `/data/db` (root, then data then db), and changed ownership of the folder to my user instead of it being a root or whatever it was before. (Science proved that not having folders owned by root, reduces the number of permission denied errors by 100%.) +This data folder is where your local database instance will store all databases, documents, and so on-all data. 
The figure 5-1 below shows how I created my data folder in `/data/db` (root, then `data` then `db`), and changed ownership of the folder to my user instead of it being a root or whatever it was before. Science proved that not having folders owned by root, reduces the number of permission denied errors by 100%. Figure 5-1 shows how this looks onscreen. -Figure 5-1 shows how this looks onscreen. ![alt](media/image1.png) ***Figure 5-1.** Initial setup for MongoDB: create the data directory* -If you prefer to store data somewhere else rather than `/data/db`, then you can do it. Just specify your custom path using the `--dbpath` option to `mongod` (main MongoDB service) when you launch your database instance (server). +If you prefer to store data somewhere else rather than `/data/db`, then you can do it. Just specify your custom path using the `--dbpath` option to `mongod` (the main MongoDB service) when you launch your database instance (server). -If some of the steps weren't enough, then here's the another interpretation of the installation instructions for MongoDB on various OSs are available at MongoDB.org, "[Install MongoDB on OS X](http://docs.mongodb.org/manual/tutorial/install-mongodb-on-os-x/)"(http://docs.mongodb.org/manual/tutorial/install-mongodb-on-os-x/). For Windows users, there is a good walk-through article titled "[Installing MongoDB](http://www.tuanleaded.com/blog/2011/10/installing-mongodb)"(http://www.tuanleaded.com/blog/2011/10/installing-mongodb). +If some of these steps weren't enough, then another interpretation of the installation instructions for MongoDB on various OSs is available at MongoDB.org, "[Install MongoDB on OS X](http://docs.mongodb.org/manual/tutorial/install-mongodb-on-os-x)" (). Windows users can read a good walk-through article titled "[Installing MongoDB](http://www.tuanleaded.com/blog/2011/10/installing-mongodb)" (). # How to Run the Mongo Server -To run the Mongo server (a.k.a. DB instance, service or daemon), there's `mongod` command. If you installed in manually and didn't link the location to PATH, then go to the folder where you unpacked MongoDB. That location should have a `bin` folder in it. From that folder, type the following command: +To run the Mongo server (a.k.a. DB instance, service, or daemon), there's the `mongod` command. It's not `mongodb` or `mongo`. It's `mongod`. Remember the "d". It's stands for daemon. + +If you installed in manually and didn't link the location to PATH, then go to the folder where you unpacked MongoDB. That location should have a `bin` folder in it. From that folder, type the following command: ``` $ ./bin/mongod ``` -If you like most normal developers, pefer to always navigate to a DB folder and just type `mongod` anywhere on your computer, I assume you exposed that DB folder in your PATH variable. So if you added `$PATH` for the MongoDB location, type the following *anywhere you like*: +If you are like most normal developers, and prefer to type `mongod` anywhere on your computer, I assume you exposed the MongoDB `bin` folder in your `PATH` environment variable. So if you added `$PATH` for the MongoDB location, type the following *anywhere you like*: ``` $ mongod ``` -**Note**: Oh, yeah. Don't forget to restart the terminal window after adding a new path to the `$PATH` variable (Figure 5-2). That's just how terminal apps work. They might not pick up your newest PATH value until you restart them. +**Note** Oh, yeah. 
Don't forget to restart the terminal window after adding a new path to the `$PATH` variable (Figure 5-2). That's just how terminal apps work. They might not pick up your newest PATH value until you restart them. ![alt](media/image2.png) -***Figure 5-2.** Starting up the MongoDB server* +***Figure 5-2.** Successful starting of the MongoDB server outputs "waiting for connections on port 27017"* -There are tons of info on the screen. If you can find and see something saying about 'waiting' then you are all set, a message like this: +There's tons of info on the screen after `mongod`. If you can find something saying about "waiting" and "port 27017", then you are all set. Look for a message this: ``` -MongoDB starting: pid =7218 port=27017... -... waiting for connections on port 27017 ``` -That text means the MongoDB database server is running. Congrats! By default, it's listening at . This is the host and port for the scripts and applications to access MongoDB. In our Node.js code, we use 27017 for for the database and port 3000 for the server. +That text means the MongoDB database server is running. Congrats! + +By default, it's listening at . This is the host and port for the scripts and applications to access MongoDB. In our Node.js code, we use 27017 for for the database and port 3000 for the server. If you see anything else, then you probably have one of the two: -* The data or db folders are not create or create with root permissions. Solution is to create it with non-root. -* The MongoDB folder is not exposed and `mongod` cannot be found. Solution is to use the correct location or expose the location in PATH. +* The `data` or `db` folders are not created or were created with root permissions. The solution is to create them with non-root. +* The MongoDB folder is not exposed, and `mongod` cannot be found. The solution is to use the correct location or expose the location in PATH. -Please fix the issue or if you are all set with the 'waiting' notice, the let's go and play with the database using Mongo Console. +Please fix the issue(s) if you have any. If you are all set with the "waiting" notice, the let's go and play with the database using Mongo Console. # Data Manipulation from the Mongo Console Akin to the Node.js REPL, MongoDB has a console/shell that acts as a client to the database server instance. This means that we have to keep the terminal window with the server open and running while using the console in a different window/tab. -From the folder where you unpacked the archive, launch the `mongod` service with the command pointing to the bin folder: +From the folder where you unpacked the archive, launch the `mongod` service with the command pointing to the `bin` folder: ``` $ ./bin/mongod @@ -128,11 +133,11 @@ Or, if you installed MongoDB globally (recommended), launch the `mongod` service $ mongod ``` -You should be able to see information in your terminal saying 'waiting for connections on 27-17'. +You should be able to see information in your terminal saying "waiting for connections on 27017". -Now, we will launch a separate process or an application if you will. It's called the MongoDB console or shell and it allows developers to connect to the database instance and perform pretty much anything they want: create new documents, update them and delete. In other words, Mongo console is a client. Its benefit is that it comes with MongoDB and does NOT require anything fancy or complex. It work in the terminal which mean you can use it on almost any OS (yes, even on Windows). 
+Now, we will launch a separate process or an application, if you will. It's called the MongoDB console or shell, and it allows developers to connect to the database instance and perform pretty much anything they want: create new documents, update them, and delete. In other words, Mongo console is a client. Its benefit is that it comes with MongoDB and does NOT require anything fancy or complex. It works in the terminal, which means you can use it on almost any OS (yes, even on Windows). -The name of the command is `mongo`. Execute this command in a *new* terminal window (_important!_). Again, if you didn't expose your MongoDB to PATH, then, in the same folder in which you have MongoDB, type the `mongo` command with path to this `mongo` file which is in the `bin` of the MongoDB installation. Open another terminal window in the same folder and execute: +The name of the command is `mongo`. Execute this command in a *new* terminal window (_important!_). Again, if you didn't expose your MongoDB to `PATH`, then in the same folder in which you have MongoDB, type the `mongo` command with path to this `mongo` file, which is in the `bin` of the MongoDB installation. Open another terminal window in the same folder and execute: ``` $ ./bin/mongo @@ -140,42 +145,42 @@ $ ./bin/mongo -Or, if you have `mongo` "globally" by exposing the MongoDB's bin into PATH, simply type from any folder (you don't have to be in the MongoDB folder or specify bin since you already have that path in your PATH environment variable): +Or, if you have `mongo` "globally" by exposing the MongoDB's `bin` into `PATH`, simply type from any folder (you don't have to be in the MongoDB folder or specify bin since you already have that path in your PATH environment variable): ``` $ mongo ``` -When you successfully connect to the database instance, then you should see something like this. Of course the exact version will be depending on your version of the MongoDB shell. +When you successfully connect to the database instance, then you should see something like this. Of course, the exact version will depend on your version of the MongoDB shell. My Mongo shell is 2.0.6: ``` MongoDB shell version: 2.0.6 connecting to: test ``` -Did you notice the cursor change? It's now `>` as shown in Figure 5-3. It mean you are in a different environment than bash or zsh (which I use). You cannot execute shell command anymore so don't try to use `node server.js`, or `mkdir my-awesome-pony-project`. It won't work. But what will work is JavaScript, Node.js and some special MongoDB code. For example, type and execute the following two commands to save a document `{a: 1}` (super creative, I know, thanks) and then query the collection to see the newly created document there. +Did you notice the cursor change? It's now `>`, as shown in Figure 5-3. It mean you are in a different environment than bash or zsh (which I use). You cannot execute shell command anymore, so don't try to use `node server.js` or `mkdir my-awesome-pony-project`. It won't work. But what will work is JavaScript, Node.js, and some special MongoDB code. For example, type and execute the following two commands to save a document `{a: 1}` (super creative, I know, thanks) and then query the collection to see the newly created document there: ``` > db.test.save( { a: 1 } ) > db.test.find() ``` -Figure 5-3 shows this. If you see that your record is being saved, then everything went well. Commands `find` and `save` do exactly what you might think they do. 
;-) +Figure 5-3 shows that I saved my record `{a:1}`. Everything went well. The commands `find()` and `save()` do exactly what you might think they do ;-), only you need to prefix them with `db.COLLECTION_NAME` where you substitute `COLLECTION_NAME` for your own name. ![alt](media/image3.png) -***Figure 5-3.** Running MongoDB client and storing sample data* +***Figure 5-3.** Running the MongoDB shell/console client and executing queries in the test collection* -**Note** On macOS (and most Unix systems), to close the process, use `control + c`. If you use `control + z`, it puts the process to sleep (or detaches the terminal window). In this case, you might end up with a lock on data files and then have to use the "kill" command (e.g., `$ killall node`) or Activity Monitor and delete the locked files in the data folder manually. For a vanilla macOS terminal, `command +` . is an alternative to `control + c`. +**Note** On macOS (and most Unix systems), to close the process, use control+C. If you use control+Z, it puts the process to sleep (or detaches the terminal window). In this case, you might end up with a lock on data files and then have to use the "kill" command (e.g., `$ killall node`) or Activity Monitor and delete the locked files in the data folder manually. For a vanilla macOS terminal, command+. is an alternative to control+C. -What are some other MongoDB console command which developers like you and me can use? We will study the most important of them next. +What are some other MongoDB console commands that seasoned Node developers like you and I can use? We will study the most important of them next. # MongoDB Console in Detail -MongoDB console syntax is JavaScript. That's wonderful. The last thing we want is to learn a new complex language like SQL. However, MongoDB console methods are not without their quirks. For example, `db.test.find()` has a class name db, then my collection name test and then a method name find. In other words, it's a mix of arbitrary (custom) and mandatory (fixed) names. That's unusual. +MongoDB console syntax is JavaScript. That's wonderful. The last thing we want is to learn a new complex language like SQL. However, MongoDB console methods are not without their quirks. For example, `db.test.find()` has a class name `db`, then my collection name `test`, and then a method name `find()`. In other words, it's a mix of arbitrary (custom) and mandatory (fixed) names. That's unusual. -Let's take a look at the list of the most useful MongoDB console (shell) commands are listed here: +Let's take a look at the most useful MongoDB console (shell) commands, which I listed here: - `> help`: prints a list of available commands - `> show dbs`: prints the names of the databases on the database server to which the console is connected (by default, localhost:27017; but, if we pass params to `mongo`, we can connect to any remote instance) @@ -189,7 +194,7 @@ Let's take a look at the list of the most useful MongoDB console (shell) command - `> db.collection_name.remove(query)`; removes all items from `collection_name` that match `query` criteria - `> printjson(document);`: prints the variable `document` -It's possible to use good old JavaScript, for example, storing document in variable is as easy as using an equal sign. `printjson()` is a utility method which outputs the value of a variable. +It's possible to use good old JavaScript. For example, storing a document in a variable is as easy as using an equal sign `=`. 
Then, `printjson()` is a utility method that outputs the value of a variable. The following code will read one document, add a field `text` to it, print and save the document: ``` > var a = db.messages.findOne() @@ -199,37 +204,38 @@ It's possible to use good old JavaScript, for example, storing document in v > db.messages.save(a) ``` -`save()` works two ways. If you have `_id` which is a unique MongoDB ID, then the document will be updated with whatever new properties where passed to the `save()` method. That's the example above in which we create a new property `text` and assigned a value of `hi` to it. +`save()` works two ways. If you have `_id`, which is a unique MongoDB ID, then the document will be updated with whatever new properties were passed to the `save()` method. That's the previous example in which I create a new property `text` and assigned a value of `hi` to it. -When there's not `_id`, then MongoDB console will insert a new document and create a new Object ID in `_id`. That's the very first example where we used `db.test.save({a:1})`. To sum up, `save()` work like an upsert (update or insert). +When there's no `_id`, then MongoDB console will insert a new document and create a new document ID (`ObjectId`) in `_id`. That's the very first example where we used `db.test.save({a:1})`. To sum up, `save()` works like an upsert (update or insert). -For the purpose of saving time, the API listed here is the bare minimum to get by with MongoDB in this book and its projects. The real interface is richer and has more features. For example, `update` accepts options such as `multi: true`, and it's not mentioned here. A full overview of the MongoDB interactive shell is available at mongodb.org, "[Overview—The MongoDB Interactive Shell](http://www.mongodb.org/display/DOCS/Overview+-+The+MongoDB+Interactive+Shell)"(http://www.mongodb.org/display/DOCS/Overview+-+The+MongoDB+Interactive+Shell). +For the purpose of saving time, the API listed here is the bare minimum to get by with MongoDB in this book and its projects. The real interface is richer and has more features. For example, `update` accepts options such as `multi: true`, and it's not mentioned here. A full overview of the MongoDB interactive shell is available at mongodb.org: "[Overview—The MongoDB Interactive Shell](http://www.mongodb.org/display/DOCS/Overview+-+The+MongoDB+Interactive+Shell)" (). -I'm sure you all enjoyed typing those bracets and parenthesis in the terminal just to get a typo somewhere. That's why I create MongoUI which is a web-based database admin interface. It allows you to view, edit, search, remove MongoDB documents without typing commands. Check out MongoUI at . You can install MongoUI with npm by executing `nmp i -g mongoui` and then start it with `mongoui`. It'll open the app in your default browser and connect to your local db instance (if there's one). +I'm sure you all enjoyed typing those brackets and parentheses in the terminal just to get a typo somewhere (#sarcasm). That's why I created MongoUI, which is a web-based database admin interface. It allows you to view, edit, search, remove MongoDB documents without typing commands. Check out MongoUI at . You can install MongoUI with npm by executing `nmp i -g mongoui` and then start it with `mongoui`. It'll open the app in your default browser and connect to your local DB instance (if there's one). -One more useful MongoDB command (script) is `mongoimport`. It allows developers to supply a JSON file which will be imported to a database. 
Let's say you are migrating a database or have some initial data which you want to use but the database is empty right now. How do you create multiple records? You can copypasta to MongoDB console but that's not fun. Use `mongoimport`. Here's an example how to inject a data from a JSON file with an array of object: +MongoUI is a web-based app which you can host on your own application. For an even better desktop tool than my own MongoUI, download Compass at TK. It's built in Node using Electron and React. -``` +One more useful MongoDB command (script) is `mongoimport`. It allows developers to supply a JSON file that will be imported to a database. Let's say you are migrating a database or have some initial data that you want to use, but the database is empty right now. How do you create multiple records? You can copypasta to MongoDB console, but that's not fun. Use `mongoimport`. Here's an example of how to inject a data from a JSON file with an array of object: +``` mongoimport --db dbName --collection collectionName --file fileName.json --jsonArray ``` -You don't need to do anything extra to install mongoimport. It's already part of the MongoDB installation and lives in the same folder as `mongod` or `mongo`, i.e., `bin`. And JSON is not the only format which mongoimport takes. It can be CSV, or TSV as well. Isn't it neat? +You don't need to do anything extra to install `mongoimport`. It's already part of the MongoDB installation and lives in the same folder as `mongod` or `mongo`, i.e., `bin`. And JSON is not the only format that `mongoimport` takes. It can be CSV, or TSV as well. Isn't it neat? 😇 -Connecting and working with a database directly is a super power. You can debug or seed the data without the need of writing any Node code. But sooner or later, you'll want to automate the work with the database. Node is great for that. To be able to work with MongoDB from Node, we need a driver. +Connecting and working with a database directly is a superpower. You can debug or seed the data without the need for writing any Node code. But sooner or later, you'll want to automate the work with the database. Node is great for that. To be able to work with MongoDB from Node, we need a driver. # Minimalistic Native MongoDB Driver for Node.js Example -To illustrate the advantages of Mongoskin, let's use [Node.js native driver for MongoDB](https://github.com/christkv/node-mongodb-native) () first. We need to write a basic script that accesses the database. +To illustrate the advantages of Mongoskin, I will show how to use the [Node.js native driver for MongoDB](https://github.com/christkv/node-mongodb-native) () which is somewhat more work than to use Mongoskin. I create a basic script that accesses the database. -First, however, let's install the MongoDB native driver for Node.js with SE to save the exact version in a dependency (must create `package.json` first): +Firstly, create `package.json` with `npm init -y`. Then, install the MongoDB native driver for Node.js with `SE` to save the exact version as a dependency: ``` $ npm install mongodb@2.2.33 -SE ``` -This is an example of a good `package.json` file with the driver dependency listed in there. It's from `code/ch5/mongodb-examples`. There are two more packages which you can ignore for now. One of them is validating code formatting (`standar`) and another is an advanced MongoDB library (`mongoskin`): +This is an example of a good `package.json` file with the driver dependency listed in there. It's from `code/ch5/mongodb-examples`. 
There are two more packages. You can ignore them for now. One of them is validating code formatting (`standard`) and another is an advanced MongoDB library (`mongoskin`): ```js { @@ -253,7 +259,7 @@ This is an example of a good `package.json` file with the driver dependency list } ``` -It's always a good learning to start from something small and then build the skills gradually. Thus, let's study a small example which tests whether we can connect to a local MongoDB instance from a Node.js script and run a sequence of statements analogous to the previous section: +It's a good learning approach to start from something small and then build skills gradually. For this reason let's study a small example that tests whether we can connect to a local MongoDB instance from a Node.js script and run a sequence of statements analogous to the previous section: 1. Declare dependencies 2. Define the database host and port @@ -261,8 +267,8 @@ It's always a good learning to start from something small and then build the ski 4. Create a database document 5. Output a newly created document/object -The file name for this short script is `code/ch5/mongo-native-insert.js`. We start this file with some imports. Then we will connect to the database using host and port -This is one of the ways to establish a connection to the MongoDB server, in which the `db` variable holds a reference to the database at a specified host and port: +The file name for this short script is `code/ch5/mongo-native-insert.js`. We'll start this file with some imports. Then we will connect to the database using host and port. +This is one of the ways to establish a connection to the MongoDB server in which the `db` variable holds a reference to the database at a specified host and port: ```js const mongo = require('mongodb') @@ -283,7 +289,7 @@ db.open((error, dbConnection) => { }) ``` -For example, to create a document in MongoDB, we can use the `insert()` method. Unlike Mongo console, this `insert()` is asynchronous which means it won't execute immediately. The results will be coming later. That's why there's a callback. The callback has error as its first argument. It's called error-first pattern. The result which is the newly created document is the second argument of the callback. In the console, we don't really have multiple clients executing queries so in the console methods are synchronous. Situation is different in Node because we want to process multiple clients while we wait for the database to respond. +For example, to create a document in MongoDB, we can use the `insert()` method. Unlike Mongo console, this `insert()` is *asynchronous* which means it won't execute immediately. The results will be coming later. That's why there's a callback. The callback has `error` as its first argument. It's called error-first pattern. The result that is the newly created document is the second argument of the callback. In the console, we don't really have multiple clients executing queries so in the console methods are synchronous. The situation is different in Node because we want to process multiple clients while we wait for the database to respond. It's important to handle the error by checking for it and then exiting with an error code of 1: @@ -301,7 +307,7 @@ dbConnection }) ``` -Here is the entire code to accomplish these five steps. The most important thing to observe and remember is that ENTIRE working code of `insert()` is **inside** of the `open()` callback. 
This is because `open()` is asynchronous which in turn is because `dbConnection` becomes available with a delay and we don't want to block the Node's event loop waiting for the `dbConnection`. The full source code of this script is in the `mongo-native-insert.js` file and below for convenience in case you don't have the GitHub open right now: +Here is the entire code to accomplish these five steps. The most important thing to observe and remember is that *ENTIRE* working code of `insert()` is **inside** of the `open()` callback. This is because `open()` is asynchronous, which in turn is because `dbConnection` becomes available with a delay and we don't want to block the Node's event loop waiting for the `dbConnection`. The full source code of this script is in the `mongo-native-insert.js` file and included next for convenience in case you don't have the GitHub open right now: ```javascript const mongo = require('mongodb') @@ -339,15 +345,15 @@ db.open((error, dbConnection) => { ``` -Now we can build a few more methds. For example, another `mongo-native.js` script looks up any object and modifies it: +Now we can build a few more methods. For example, another `mongo-native.js` script looks up any object and modifies it: -1. Get one item from the `message` collection. -2. Print it. -3. Add a property text with the value `hi`. -4. Save the item back to the `message` collection. +1. Get one item from the `message` collection +2. Print it +3. Add a property text with the value `hi` +4. Save the item back to the `message` collection -After we install the library, we can include the MongoDB library in our `mongo-native.js` file as well as create host and port values. +After we install the library, we can include the MongoDB library in our `mongo-native.js` file as well as create host and port values: ```js const mongo = require('mongodb') @@ -357,7 +363,7 @@ const {Db, Server} = mongo const db = new Db('local', new Server(dbHost, dbPort), {safe: true}) ``` -Next open a connection. It's always a good practice to check for any errors and exit gracefully. +Next open a connection. It's always a good practice to check for any errors and exit gracefully: ```javascript db.open((error, dbConnection) => { @@ -368,7 +374,7 @@ db.open((error, dbConnection) => { console.log('db state: ', db._state) ``` -Now, we can proceed to the first step mentioned earlier—getting one item from the `message` collection. The first argument to `findOne()` is a search or query criteria. It works as a logical AND meaning the properties passed to `findOne()` will be matched against the documents in the database. The returned document will be in the callback's argument. This document is in the `item` variable. +Now, we can proceed to the first step mentioned earlier—getting one item from the `message` collection. The first argument to `findOne()` is a search or query criteria. It works as a logical AND, meaning the properties passed to `findOne()` will be matched against the documents in the database. The returned document will be in the callback's argument. This document is in the `item` variable. The variable name doesn't matter that much. What matters is the order of an argument in the callback function. Ergo, **first argument is always an error object even when it's null. The second is the result of a method.** This is true for almost all MongoDB native driver methods but not for every Node library. Node developers need to read the documentation for a particular library to see what arguments are provided to a callback. 
But in the case of MongoDB native drive, error and result is the convention to remember and use. @@ -385,9 +391,9 @@ The second step, print the value, is as follows: ```javascript console.info('findOne: ', item) ``` -As you can see, methods in the console and Node.js are not much different except that in Node developers *must use callbacks*. +As you can see, methods in the console and Node.js are not much different except that in Node, developers *must use callbacks*. -So let's proceed to the remaining two steps: adding a new property and saving the document. `save()` works like an upsert: if a valid `_id` is provided, then the documents will be updated, if not then the new documents will be created. +Next let's proceed to the remaining two steps: adding a new property and saving the document. `save()` works like an upsert: if a valid `_id` is provided, then the documents will be updated; if not, then the new documents will be created: ```javascript item.text = 'hi' @@ -403,7 +409,7 @@ So let's proceed to the remaining two steps: adding a new property and savin console.info('save: ', document) ``` -To double-check the saved object, we use the ObjectID that we saved before in a string format (in a variable `id`) with the `find` method. This method returns a cursor, so we apply `toArray()` to extract the standard JavaScript array: +To convert a string into the `ObjectId` type, use `mongo.ObjectID()` method. To double-check the saved object, we use the document ID that we saved before in a string format (in a variable `id`) with the `find()` method. This method returns a cursor, so we apply `toArray()` to extract the standard JavaScript array: ```javascript dbConnection.collection('messages') @@ -423,18 +429,20 @@ To double-check the saved object, we use the ObjectID that we saved before in a }) ``` -The full source code of this script is available in the `mongo-native-insert.js` and `mongo-native.js` files. If we run them with `$ node mongo-native-insert` and, respectively, `$ node mongo-native`, while running the `mongod` service the scripts should output something similar to the results in Figure 5-4. There are three documents. The first is without the property text; the second and third documents include it. +The full source code of this script is available in the `mongo-native-insert.js` and `mongo-native.js` files. If we run them with `$ node mongo-native-insert` and, respectively, `$ node mongo-native`, while running the `mongod` service, the scripts should output something similar to the results in Figure 5-4. There are three documents. The first is without the property text; the second and third documents include it. ![alt](media/image4.png) ***Figure 5-4.** Running a simple MongoDB script with a native driver* -Majority of readers will be good with the methods studied here since these methods provide all the CRUD functionality (create, read, update and delete). But for more advanced developers, the full documentation of this library is available at and on the MongoDB web site. +From teaching dozens of MongoDB workshops, I can be sure that the majority of readers will be good with the methods studied here since these methods provide all the CRUD functionality (create, read, update, and delete). But for more advanced developers, the full documentation of this library is available at and on the MongoDB website. # Main Mongoskin Methods -Meet Mongoskin (don't confuse with Redskin). Mongoskin provides a better API than the native MongoDB driver. 
To illustrate this, compare this code with the example written using native MongoDB driver for Node.js. As always, to install a module, run npm with install—for example,
+Meet Mongoskin (don't confuse with DC's Redskins). It provides a better API than the native MongoDB driver. To illustrate this, compare the following Mongoskin implementation with the example in the prior section, which was written using the native MongoDB driver for Node.js.
+
+As always, to install a module, run npm with install:

```
$ npm i mongoskin@2.1.0 -SE
@@ -450,11 +458,11 @@ const dbPort = 27017

const db = mongoskin.db(`mongodb://${dbHost}:${dbPort}/local`)
```

-As you can see the Mongoskin method to connect to the database does not require you to put all the rest of the code in the callback. That is because there's library which will buffer up the upcoming queries and execute that when the connection is ready.
+As you can see, the Mongoskin method to connect to the database does *not* require you to put all the rest of the code in the callback. That's because Mongoskin buffers up the upcoming queries and executes them when the connection is ready. I like not having to put all of my Node code in one giant callback.

-We can also create our own methods on collections. This might be useful when implementing an model-view-controller-like (MVC-like) architecture by incorporating app-specific logic into these custom methods. See how we can create a custom method `findOneAndAddText()` which take some text (duh) and executes two MongoDB methods to first find that document and then update it in the database with the passed text. Custom methods are your own project-specific methods and they are great at re-using code.
+We can also create our own methods on collections. This might be useful when implementing a model-view-controller-like (MVC-like) architecture by incorporating app-specific logic into these custom methods. See how we can create a custom method `findOneAndAddText()` that takes some text (duh) and executes two MongoDB methods to first find that document and then update it in the database with the passed text. Custom methods are your own project-specific methods, and they are great at reusing code.

-Did you notice that there's no fat arrow function for the custom method `findOneAndAddText()`? That's because we need to let Mongoskin to pass the collection to use `this` inside of this method. If we use the fat arrow `()=>{}`, then we can's use `this.findOne` inside of the custom method.
+Did you notice that there's no fat arrow function for the custom method `findOneAndAddText()`? That's because we need to let Mongoskin pass the collection so that we can use `this` inside of this method. If we use the fat arrow `()=>{}`, then we can't use `this.findOne()` inside of the custom method:

```javascript
db.bind('messages').bind({
@@ -481,7 +489,7 @@ db.bind('messages').bind({
})
```

-Last, we call the custom method like any other methods such as `find()` or `save()`. The more we use this custom in our code the more the benefit of the code reuse and this pattern. Important to notice the `toArray()` method for the `find()` because `documents` must be an array.
+Last, we call the custom method like any other method, such as `find()` or `save()`. The more we use this custom method in our code, the greater the benefit of code reuse and this pattern. It's important to use the `toArray()` method for the `find()` because the result of the query, `documents`, is more useful as an array.
```javascript db.messages.findOneAndAddText('hi', (count, id) => { @@ -499,27 +507,27 @@ db.messages.findOneAndAddText('hi', (count, id) => { }) ``` -Mongoskin is a subset of the native Node.js MongoDB driver, so most of the methods as you have observed from the latter are available in the former. For example `find()`, `findOne()`, `update()`, `save()` and `remove(). They are from the native MongoDB driver and they are available in the Mongoskin straight up. But there are more methods. Here is the list of the main Mongoskin–only methods: +Mongoskin is a subset of the native Node.js MongoDB driver, so most of the methods, as you have observed from the latter are available in the former. For example, `find()`, `findOne()`, `update()`, `save()`, and `remove()`. They are from the native MongoDB driver and they are available in the Mongoskin straight up. But there are more methods. Here is the list of the main Mongoskin–only methods: -- `findItems(..., callback)`: finds elements and returns an array instead of a cursor -- `findEach(..., callback)`: iterates through each found element -- `findById(id, ..., callback)`: finds by `_id` in a string format -- `updateById(_id, ..., callback)`: updates an element with a matching `_id` -- `removeById(_id, ..., callback)`: removes an element with a matching `_id` +- `findItems(..., callback)`: Finds elements and returns an array instead of a cursor +- `findEach(..., callback)`: Iterates through each found element +- `findById(id, ..., callback)`: Finds by `_id` in a string format +- `updateById(_id, ..., callback)`: Updates an element with a matching `_id` +- `removeById(_id, ..., callback)`: Removes an element with a matching `_id` -Of course there are alternatives. The alternatives to the native MongoDB driver and Mongoskin include but not limited to: +Of course, there are alternatives to Mongoskin and the native MongoDB driver, including but not limited to: -- `mongoose`: an asynchronous JavaScript driver with optional support for modeling -- `mongolia`: a lightweight MongoDB ORM/driver wrapper -- `monk`: a tiny layer that provides simple yet substantial usability improvements for MongoDB use within Node.js +- `mongoose`: An asynchronous JavaScript driver with optional support for modeling (recommended for large apps) +- `mongolia`: A lightweight MongoDB ORM/driver wrapper +- `monk`: A tiny layer that provides simple yet substantial usability improvements for MongoDB use within Node.js -Data validation is super important. Most of the MongoDB libraries will require developers to create their own validation with Mongoose being an exception. Mongoose has a built-in data validation. Thus for data validation at the Express level, these modules are often used: +Data validation is super important. Most of the MongoDB libraries will require developers to create their own validation, with Mongoose being an exception. Mongoose has a built-in data validation. Thus, for data validation at the Express level, these modules are often used: - `node-validator`: validates data - `express-validator`: validates data in Express.js 3/4 -It is time to utilize our skills and build something interesting with MongoDB by enhancing our blog project. +It is time to utilize our skills and build something interesting with MongoDB by enhancing our Blog project. # Project: Storing Blog Data in MongoDB with Mongoskin @@ -529,11 +537,11 @@ Let's now return to our Blog project. I've split this feature of storing 2. Writing Mocha tests 3. 
Adding persistence

-The task numero uno is to populate the database with some test data.
+The task numero uno is to populate the database with some test data. (Numero uno is number one in Spanish.)

## Project: Adding MongoDB Seed Data

-First of all, it's not much fun to enter data manually each time we test or run an app. So, in accordance with the Agile principles, we can automate this step by creating a Bash seed data script `db/seed.sh`:
+First of all, it's not much fun to enter data manually each time we test or run an app. So, in accordance with the Agile principles, we can automate this step by creating a shell seed data script `db/seed.sh`:

```
mongoimport --db blog --collection users --file ./db/users.json –jsonArray
@@ -552,7 +560,7 @@ The `users.json` file contains information about authorized users:
}]
```

-Here some of the content of he `articles.json` file which has the seed content of the blog posts and testing (please use the file provided in GitHub instead of typing from the book):
+Here's some of the content of the `articles.json` file that has the seed content of the blog posts and testing (please use the file provided in GitHub instead of typing from the book):

``` js
[
@@ -617,7 +625,7 @@ describe('server', () => {
})
```

-Let's start the implementation with import/require statement (import not in a sense we are using ES6 `import` statement but in a sense that `require()` method imports).
+Let's start the implementation with the import/require statements (import not in the sense of the ES6 `import` statement, but in the sense that the `require()` method imports):

```js
const boot = require('../app').boot
@@ -681,15 +689,15 @@ In a new-article page suite, let's test for presentation of the text with `c
})
```

-To make sure that Mocha doesn't quit earlier than `superagent` calls the response callback, we implemented a countertrick. Instead of it, you can use async. The full source code is in the file `tests/index.js` under `ch5` folder.
+To make sure that Mocha doesn't quit before `superagent` calls the response callback, we implemented a countertrick. Instead, you can use async. The full source code is in the file `tests/index.js` under the `ch5` folder.

Running tests with either `$ make test` or `$ mocha test` should fail miserably, but that's expected because we need to implement persistence and then pass data to Pug templates, which we wrote in the previous chapter.

## Project: Adding Persistence

-This example builds on the previous chapter, with the chapter 3 having the latest code (chapter 4 code is in ch5). Let's go back to our `ch3` folder, and add the tests, duplicate them, and then start adding statements to the `app.js` file.
+This example builds on the previous chapter, with Chapter 3 having the latest code (Chapter 4 code is in `ch5`). Let's go back to our `ch3` folder, and add the tests, duplicate them, and then start adding statements to the `app.js` file.

-The full source code of this example is available under `ch5` folder. First, the dependencies inclusions need to be reformatted to utilize Mongoskin:
+The full source code of this example is available under the `ch5` folder.
First, we refactor dependencies importations to utilize Mongoskin: ```js const express = require('express') @@ -706,7 +714,7 @@ const collections = { } ``` -These statements are needed for the Express.js middleware modules to enable logging (`morgan`), error handling (`errorhandler`), parsing of the incoming HTTP request bodies (`body-parser`) and to support clients which do not have all HTTP methods (`method-override`): +These statements are needed for the Express.js middleware modules to enable logging (`morgan`), error handling (`errorhandler`), parsing of the incoming HTTP request bodies (`body-parser`), and to support clients that do not have all HTTP methods (`method-override`): ```js const logger = require('morgan') @@ -715,24 +723,25 @@ const bodyParser = require('body-parser') const methodOverride = require('method-override') ``` -Then, we create of an Express.js instance and assigning the title to use this title in the templates: +Then we create an Express.js instance and assign the title to use this title in the templates: ```js const app = express() app.locals.appTitle = 'blog-express' ``` -Now, we add a middleware that exposes Mongoskin/MongoDB collections in each Express.js route via the `req` object. It's called a decorator pattern. You can learn more about the decorator pattern as well as other Node patterns in my online course [Node Patterns: From Callbacks to Observer](https://node.university/p/node-patterns). The idea is to have `req.collections` in all other subsequent middleware and routes. It's done with this code. And don't forget to call `next()` in the middleware; otherwise, each request will stall. +Now we add a middleware that exposes Mongoskin/MongoDB collections in each Express.js route via the `req` object. It's called a *decorator* pattern. You can learn more about the decorator pattern as well as other Node patterns in my online course [Node Patterns: From Callbacks to Observer](https://node.university/p/node-patterns). The idea is to have `req.collections` in all other subsequent middleware and routes. It's done with the following code. And don't forget to call `next()` in the middleware; otherwise, each request will stall: ```js app.use((req, res, next) => { - if (!collections.articles || !collections.users) return next(new Error('No collections.')) + if (!collections.articles || !collections.users) + return next(new Error('No collections.')) req.collections = collections return next() }) ``` -Next, we define the Express settings. We set up port number and template engine configurations to tell Express what folder to use for templates (views) and what template engine to use to render those templates (pug): +Next, we define the Express settings. We set up port number and template engine configurations to tell Express what folder to use for templates (`views`) and what template engine to use to render those templates (`pug`): ```js app.set('port', process.env.PORT || 3000) @@ -740,7 +749,7 @@ app.set('views', path.join(__dirname, 'views')) app.set('view engine', 'pug') ``` -Now the usual suspects, `app.use()` statement to plug-in the Express middleware modules. The functionality of most of which should be already familiar to you. The functionality includes logging of requests, parsing of JSON input, using Stylus for CSS and serving of static content. I like to remain disciplined and use `path.join()` to construct cross-platform absolute paths out of relative folder names so that there's a guarantee the paths will work on Windows. 
+Now is the time for the usual suspects functionality of most of which should be already familiar to you: middleware for logging of requests, parsing of JSON input, using Stylus for CSS and serving of static content. Node developers use the `app.use()` statements to plug these middleware modules in the Express apps. I like to remain disciplined and use `path.join()` to construct cross-platform absolute paths out of relative folder names so that there's a guarantee the paths will work on Windows. ```js app.use(logger('dev')) @@ -759,7 +768,7 @@ if (app.get('env') === 'development') { } ``` -The next sections of the `app.js` file deals with the server routes. So, instead of a single catch-all `*` route in the ch3 examples, we have the following GET, and POST routes (that mostly render HTML from Pug templates): +The next section of the `app.js` file deals with the server routes. So, instead of a single catch-all `*` route in the `ch3` examples, we have the following GET and POST routes (that mostly render HTML from Pug templates): ```js app.get('/', routes.index) @@ -772,7 +781,7 @@ app.post('/post', routes.article.postArticle) app.get('/articles/:slug', routes.article.show) ``` -REST API routes are used mostly for the admin page. That's where our fancy AJAX browser JavaScript will need them. They use GET, POST, PUT and DELETE methods and don't render HTML from Pug templates, but instead output JSON: +REST API routes are used mostly for the admin page. That's where our fancy AJAX browser JavaScript will need them. They use GET, POST, PUT, and DELETE methods and don't render HTML from Pug templates, but instead output JSON: ```js app.get('/api/articles', routes.article.list) @@ -781,7 +790,7 @@ app.put('/api/articles/:id', routes.article.edit) app.delete('/api/articles/:id', routes.article.del) ``` -In the end, we have a 404 catch-all route. It's a good practice to account for the cases when users type a wrong URL. If the request makes to this part of the configuration (top to bottom order), we return the "not found" status: +In the end, we have a 404 catch-all route. It's a good practice to account for the cases when users type a wrong URL. If the request makes it to this part of the configuration (top to bottom order), we return the "404: Not found" status: ```js app.all('*', (req, res) => { @@ -789,7 +798,7 @@ app.all('*', (req, res) => { }) ``` -The way we start the server is the same as in Chapter 3 which means we determine if this file is loaded by another file. In this case, we export the server object. If not, then we proceed to launch the server directly with `server.listen()`. +The way we start the server is the same as in Chapter 3, which means we determine whether this file is loaded by another file. In this case, we export the server object. If not, then we proceed to launch the server directly with `server.listen()`. ```js const server = http.createServer(app) @@ -813,9 +822,9 @@ if (require.main === module) { Again, for your convenience, the full source code of `app.js` is under `ch5/blog-example` folder. -We must add `index.js`, `article.js`, and `user.js` files to the `routes` folder, because we need them in `app.js`. The `user.js` file is bare bones for now (we add authentications in Chapter 6). +We must add `index.js`, `article.js`, and `user.js` files to the `routes` folder, because we need them in `app.js`. The `user.js` file is bare bones for now (we'll add authentications in Chapter 6). 
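For orientation, here is a minimal sketch (the structure is an assumption for illustration, not the exact file from the book's repository) of how `routes/index.js` can re-export the other route modules so that `app.js` can require the folder as a single `routes` object:

```js
// routes/index.js — hypothetical aggregator of the route modules
exports.article = require('./article')
exports.user = require('./user')

// Home page: fetch articles (newest first) and render the index template
exports.index = (req, res, next) => {
  req.collections.articles
    .find({}, {sort: {_id: -1}})
    .toArray((error, articles) => {
      if (error) return next(error)
      res.render('index', {articles: articles})
    })
}
```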
-The method for the GET `/users` route, which should return a list of existing users (which we implement later) is as follows: +The method for the GET `/users` route, which should return a list of existing users (which we'll implement later), is as follows: ```js exports.list = (req, res, next) => { @@ -863,7 +872,7 @@ exports.show = (req, res, next) => { } ``` -The GET articles API route (used in the admin page), where we fetch all articles with the `find` method and convert the results to an array before sending them back to the requestee: +The GET `/api/articles` API route (used in the admin page), where we fetch all articles with the `find()` method and convert the results to an array before sending them back to the requestee: ```js exports.list = (req, res, next) => { @@ -877,7 +886,7 @@ exports.list = (req, res, next) => { } ``` -The POST article API routes (used in the admin page), where the `insert` method is used to add new articles to the `articles` collection and to send back the result (with `_id` of a newly created item): +The POST `/api/articles` API routes (used in the admin page), where the `insert` method is used to add new articles to the `articles` collection and to send back the result (with `_id` of a newly created item): ```js exports.add = (req, res, next) => { @@ -892,7 +901,7 @@ exports.add = (req, res, next) => { } ``` -The PUT article API route (used on the admin page for publishing), where the `updateById` shorthand method is used to set the article document to the payload of the request (`req.body`). (The same thing can be done with a combination of `update` and `_id` query.) +The PUT `/api/articles/:id` API route (used on the admin page for publishing), where the `updateById` shorthand method is used to set the article document to the payload of the request (`req.body`). (The same thing can be done with a combination of `update` and `_id` query.) ```js exports.edit = (req, res, next) => { @@ -906,7 +915,7 @@ exports.edit = (req, res, next) => { } ``` -The DELETE article API which is used on the admin page for removing articles in which, again, a combination of `remove` and `_id` can be used to achieve similar results: +The DELETE `/api/articles/:id` API which is used on the admin page for removing articles in which, again, a combination of `remove` and `_id` can be used to achieve similar results: ```js exports.del = (req, res, next) => { @@ -918,7 +927,7 @@ exports.del = (req, res, next) => { } ``` -The GET `/article` post page. This page is a blank form and thus requires NO data: +The GET `/post` create a new post page. This page is a blank form and thus requires NO data: ```js exports.post = (req, res, next) => { @@ -926,7 +935,7 @@ exports.post = (req, res, next) => { } ``` -Next, there's the POST article route for the post page form (the route that actually handles the post addition). In this route we check for the non-empty inputs (`req.body`), construct `article` object and inject it into the database via `req.collections.articles` object exposed to us by middleware. Lastly, we render HTML from the `post` template: +Next, there's the POST article route for the post page form (the route that actually handles the post addition). In this route we check for the non-empty inputs (`req.body`), construct the `article` object, and inject it into the database via the `req.collections.articles` object exposed to us by middleware. 
Lastly, we render HTML from the `post` template: ```js exports.postArticle = (req, res, next) => { @@ -947,7 +956,7 @@ exports.postArticle = (req, res, next) => { } ``` -The `GET admin page` route in which we fetch sorted articles (`{sort: {_id:-1}}`) and manipulate them: +The GET `/admin` page route in which we fetch sorted articles (`{sort: {_id:-1}}`) and manipulate them: ```js exports.admin = (req, res, next) => { @@ -962,7 +971,7 @@ exports.admin = (req, res, next) => { **Note** In real production apps that deal with thousands of records, programmers usually use pagination by fetching only a certain number of items at once (5, 10, 100, and so on). To do this, use the `limit` and `skip` options with the `find` method, e.g., HackHall example: . -This time we won't duplicate the code since it's rather long. So for the full code of `article.js` please refer to the `code/ch5/blog-example/routes`. +This time we won't duplicate the code since it's rather long. So for the full code of `article.js`, please refer to the `code/ch5/blog-example/routes`. From the project section in Chapter 4, we have the `.pug` files under the `views` folder. Lastly, the `package.json` file looks as follows. Please compare your npm scripts and dependencies. @@ -1000,7 +1009,7 @@ From the project section in Chapter 4, we have the `.pug` files under the `views } ``` -For the admin page to function, we need to add some AJAX-iness in the form of the `js/admin.js` file under the `public` folder. (I don't know why I keep calling HTTP requests done with the XHR object, the AJAX calls since AJAX is Asynchronous JavaScript And XML and no one is using XML anymore.) +For the admin page to function, we need to add some AJAX-iness in the form of the `js/admin.js` file under the `public` folder. (I don't know why I keep calling HTTP requests done with the XHR object the AJAX calls, since AJAX is *Asynchronous JavaScript And XML*, and no one is using XML anymore.‍ #shrug) In this file, we use `ajaxSetup` to configure all requests because these configs will be used in many requests. Most importantly, `withCredentials` will send the cookies which is needed for admin authentication. @@ -1042,7 +1051,7 @@ var remove = function (event) { } ``` -Publishing and unpublishing are coupled together, because they both send `PUT` to `/api/articles/:id` but with different payloads (`data`). Then type is of course `PUT`. The data is turned into a string because that is what this method `$.ajax` uses. If we were to use a different library like [axios](https://npmjs.org/axios) or [fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) then the actual data format and the syntax of the call to make the request will be different. An interesting feature is coded in the callback. It allows to change the icons depending on the status of a particular article (`data.published`). +Publishing and unpublishing are coupled together, because they both send PUT to `/api/articles/:id` but with different payloads (`data`). Then type is of course PUT. The data is turned into a string because that is what this method `$.ajax` uses. If we were to use a different library like [axios](https://npmjs.org/axios) or [fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) then the actual data format and the syntax of the call to make the request would be different. An interesting feature is coded in the callback. It allows to change the icons depending on the status of a particular article (`data.published`). 
```js var update = function (event) { @@ -1070,7 +1079,7 @@ var update = function (event) { } ``` -That's not all. Defining functions won't make them work when a user clicks on a button. We need to attach event listeners. We attach event listeners in the `ready` callback to make sure the the `tbody` is in the DOM, otherwise it might be not found. +That's not all. Defining functions won't make them work when a user clicks a button. We need to attach event listeners. We attach event listeners in the `ready` callback to make sure that the `tbody` is in the DOM—otherwise, it might be not found: ```js $(document).ready(function () { @@ -1084,7 +1093,7 @@ The full source code of the front-end `admin.js` file is in `code/ch5/blog-examp ## Running the App -To run the app, simply execute `$ npm start` which will execute `$ node app.js`, but if you want to seed and test it, execute `$ npm run seed` which will execute `$ make db`. To run tests, use `$ npm test` which executes `$ make test`, respectively (Figure 5-5). (There's no difference between running npm script commands or the commands directly.) +To run the app, simply execute `$ npm start`, which will execute `$ node app.js`, but if you want to seed and test it, execute `$ npm run seed`, which will execute `$ make db`. To run tests, use `$ npm test`, which executes `$ make test`, respectively (Figure 5-5). (There's no difference between running npm script commands or the commands directly.) @@ -1092,7 +1101,7 @@ To run the app, simply execute `$ npm start` which will execute `$ node app.js`, ***Figure 5-5.** The results of running Mocha tests* -Oh yeah. Don't forget that `$ mongod` service must be running on the localhost and port 27017. The expected result is that all tests now pass (hurray!), and if users visit , they can see posts and even create new ones on the admin page () as shown in Figure 5-6. +Oh, yeah! Don't forget that `$ mongod` service must be running on the localhost and port 27017. The expected result is that all tests now pass (hurray!), and if users visit , they can see posts and even create new ones on the admin page () as shown in Figure 5-6. ![alt](media/image6.png) @@ -1102,5 +1111,7 @@ Of course, in real life, nobody leaves the admin page open to the public. Theref # Summary -In this chapter we learned how to install MongoDB, and use its console and native Node.js driver, for which we wrote a small script and refactored it to see Mongoskin in action. Last, we wrote tests, seeded scripts, and implemented the persistence layer for Blog. In the next chapter, we'll implement authorization and authentication. +In this chapter, I taught and you've learned how to install MongoDB, and use its console and native Node.js driver, for which we wrote a small script and refactored it to see Mongoskin in action. We also wrote tests, seeded scripts, implemented the persistence layer and the front-end admin page logic for Blog. + +In the next chapter, we'll dive into misty and mysterious world of auth, and implement authorization and authentication for Blog. diff --git a/chapter6/chapter6.md b/chapter6/chapter6.md index d47f48fb..73ed9499 100755 --- a/chapter6/chapter6.md +++ b/chapter6/chapter6.md @@ -1,21 +1,23 @@ Chapter 6 --------- - # Using Sessions and OAuth to Authorize and Authenticate Users in Node.js Apps + # Security and Auth in Node.js -Security is an important aspect of any real-world web application. This is especially true nowadays, because our apps don’t function in silos anymore. 
We, as developers, can and should leverage numerous third-party services (e.g., Google, Twitter, GitHub) or become service providers ourselves (e.g., provide a public API). +You know that security is an important aspect of any real-world web application. This is especially true nowadays, because our apps don’t function in silos anymore. What if I tell you that you don't have to spend days studying for security certifications or read sketchy dark-web hacker forums to implement a secure Node app? I'll show you a few tricks. -We can makes our apps and communications secure with the usage of various approaches, such as token-based authentication and/or OAuth(). Therefore, in this practical guide, I dedicate the whole chapter to matters of authorization, authentication, OAuth, and best practices. We'll look at the following topics: +We can makes our apps and communications secure by using various approaches, such as token-based authentication and/or OAuth (). We can leverage numerous third-party services (e.g., Google, Twitter, GitHub) or become service providers ourselves (e.g., provide a public API). + + In this practical book, I dedicate the whole chapter to matters of authorization, authentication, OAuth, and best practices. We'll look at the following topics: - Authorization with Express.js middleware - Token-based authentication - Session-based authentication -- Project: adding e-mail and password login to Blog +- Project: Adding e-mail and password login to Blog - Node.js OAuth - Project: Adding Twitter OAuth 1.0 sign-in to Blog with Everyauth () # Authorization with Express.js Middleware -Authorization in web apps usually means restricting certain functions to privileged clients. These functions can either be methods, pages, or REST API end points. +Authorization in web apps usually means restricting certain functions to privileged clients. These functions can either be methods, pages, or REST API endpoints. Express.js middleware allows us to apply certain rules seamlessly to all routes, groups of routes (namespacing), or individual routes. @@ -31,7 +33,7 @@ app.get('/api/users', users.list) app.post('/api/users', users.create) ``` -Interestingly enough, `app.all()` with a URL pattern and an `*` is functionally the same as utilizing `app.use()` with a URL in a sense that they both will be triggered only on those URLs that are matching the URL pattern. +Interestingly enough, `app.all()` with a URL pattern and an `*` is functionally the same as utilizing `app.use()` with a URL in a sense that they both will be triggered only on those URLs that are matching the URL pattern: ```js app.use('/api', auth) @@ -45,7 +47,7 @@ app.get('/api/users', auth, users.list) // Auth needed app.post('/api/users', auth, users.create) // Auth needed ``` -In the previous examples, `auth()` is a function with three parameters: `req`, `res`, and `next`—for example in this middleware, you can call OAuth service or query a database to get the user profile to authorize it (check for permissions) or to check for JWT or web session to authenticate the user (who is it). Or most likely do both! +In the previous examples, `auth()` is a function with three parameters: `req`, `res`, and `next`. For example in this middleware, you can call the OAuth service or query a database to get the user profile to *authorize* it (check for permissions) or to check for JWT (JSON Web Tokens) or web session to *authenticate* the user (check who it is). Or, most likely, do both! 
```js const auth = (req, res, next) => { @@ -56,17 +58,17 @@ const auth = (req, res, next) => { } ``` -The `next()` part is important, because this is how Express.js proceeds to execute subsequent request handlers and routes (if there’s a match in a URL pattern). If `next()` is invoked without anything, then the normal execution of the server will proceed. That is Express will go to the next middleware and then to the routes which match the URL. +The `next()` part is important, because this is how Express.js proceeds to execute subsequent request handlers and routes (if there’s a match in a URL pattern). If `next()` is invoked without anything, then the normal execution of the server will proceed. That is Express will go to the next middleware and then to the routes that match the URL. -If `next()` is invoked with an error object such as `next(new Error('Not authorized'))`, then Express will jump straight to the first error handler and none of the subsequent middleware or routes will be executed. +If `next()` is invoked with an error object such as `next(new Error('Not authorized'))`, then Express will jump straight to the first error handler, and none of the subsequent middleware or routes will be executed. # Token-Based Authentication -For applications to know which privileges a specific client has (e.g., admin), we must add an authentication step. In the previous example, this step goes inside the `auth()` function. +For applications to know which privileges a specific client has (e.g., admin), we must add an authentication step. In the previous example, this step went inside the `auth()` function. -The most common authentication is a cookie & session–based authentication, and the next section deals with this topic. However, in some cases, more REST-fulness is required, or cookies/sessions are not supported well (e.g., mobile). In this case, it’s beneficial to authenticate each request with a token (probably using the OAuth2.0 ()scheme). The token can be passed in a query string or in HTTP request headers. Alternatively, we can send some other authentication combination of information, such as e-mail/username and password, or API key, or API password instead of a token. +The most common authentication is a cookie&session–based authentication, and the next section deals with this topic. However, in some cases, more REST-fulness is required, or cookies/sessions are not supported well (e.g., mobile). In this case, it’s beneficial to authenticate each request with a token (probably using the OAuth2.0 () scheme). The token can be passed in a query string or in HTTP request headers. Alternatively, we can send some other authentication combination of information, such as e-mail/username and password, or API key, or API password, instead of a token. -So, in our example of token-based authentication, each request can submit a token in a query string (accessed via `req.query.token`). And, if we have the correct value stored somewhere in our app (database, or in this example just a constant `SECRET_TOKEN`), we can check the incoming token against it. If the token matches our records, we call `next()` to proceed with the request executions, if not then we call `next(error)` which triggers Express.js error handlers execution (see the note below): +In our example of token-based authentication, each request can submit a token in a query string (accessed via `req.query.token`). 
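For illustration, a client request that carries such a token could look like the following browser-side sketch (the endpoint and the way the token was obtained are assumptions, not code from the book):

```js
// Hypothetical browser code: send the token as a query string parameter
const token = 'SECRET_TOKEN' // however the client obtained it (login, config, etc.)
fetch(`/api/users?token=${encodeURIComponent(token)}`)
  .then((response) => response.json())
  .then((users) => console.log(users))
  .catch(console.error)
```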
And if we have the correct value stored somewhere in our app (database, or in this example just a constant `SECRET_TOKEN`), we can check the incoming token against it. If the token matches our records, we call `next()` to proceed with the request execution; if not, then we call `next(error)`, which triggers Express.js error handler execution (see the upcoming note):

```js
const auth = (req, res, next) => {
@@ -81,21 +83,23 @@
```

-In a more realistic example, we use API keys and secrets to generate HMAC-SHA1 (hash-based message authentication code- secure hash algorithm strings, then compare them with the value in `req.query.token`.
+In a more realistic example, we use API keys and secrets to generate HMAC-SHA1 (hash-based message authentication code—secure hash algorithm strings), and then compare them with the value in `req.query.token`.

-**Note** Calling `next()` with an error argument is analogous to throwing in the towel (i.e., to give up). The Express.js app enters the error mode and proceeds to the error handlers.
+**Note** Calling `next()` with an error argument is analogous to throwing in the towel (i.e., giving up). The Express.js app enters the error mode and proceeds to the error handlers.

-We just covered token-based authentication, which is often used in REST APIs. However, the user-facing web apps (i.e., browser-enabled users & consumers) come with cookies. We can use cookies to store and send session IDs with each request.
+We just covered the token-based authentication, which is often used in REST APIs. But user-facing web apps (i.e., browser-enabled users & consumers) often use cookies. We can use cookies to store and send session IDs with each request.

Cookies are similar to tokens, but require less work for us, the developers! This approach is the cornerstone of session-based authentication. The session-based method is the recommended way for basic web apps, because browsers already know what to do with session headers. In addition, in most platforms and frameworks, the session mechanism is built into the core. So, let’s jump straight into session-based authentication with Node.js.

# JSON Web Token (JWT) Authentication

-JSON Web Tokens (JWT) allow to send and receive data which is encrypted and has all the necessary information, not just a token such as an API key. Thus, there's no need to store user information on the server. In my opinion, JWT less secure than have the data on the server and only storing the token (API key or session ID) on the client, because while JWT is encrypted anyone can break any encryption given enough time and CPU (albeit it might take 1000s of years).
+Developers use JSON Web Tokens (JWT) to encrypt data, which is then stored on the client. JWTs contain all the necessary information, unlike regular tokens (API keys or OAuth access tokens), which are more like passwords. Thus, JWTs remove the need for a database to store user information.

-Nevertheless, JWT is a very common technique to use for implementing web apps. They eliminate the need for the server-side database or store. All info is in this token. It has three parts: header, payload and signature. The encryption method (algorithm) vary depending on what you choose. It can be HS256, RS512, ES384, etc. I'm always paranoid about security so typically the higher the number the better.
+In my opinion, JWT is less secure than web sessions.
This is because web sessions store the data on the server (usually in a database) and only store a session ID on the client. Despite JWT using encryption, anyone can break any encryption given enough time and processing power.

-To implement a simple JWT login, let's use `jsonwebtoken` library for signing tokens and `bcrypt` for hashing passwords. When client wants to create an account, the system will take the password and hash it asynchronously so not to block the server from processing other requests (the slower the hashing the worse for attackers and the better for you). For example, this is how to get the password from the incoming request body and store the hash into the `users` array:
+Nevertheless, JWT is a very common technique that frontend web app developers use. JWTs eliminate the need for the server-side database or a store. All info is in this token, which has three parts: header, payload, and signature. Whereas the structure of JWT is the same, the encryption method can vary depending on the developer's choice: HS256, RS512, ES384, and so on. I'm always paranoid about security, so the stronger the algorithm, the better. RS512 will be good for most of the cases circa 2020.
+
+To implement a simple JWT login, let's use the `jsonwebtoken` library for signing tokens and `bcrypt` for hashing passwords. When a client wants to create an account, the system takes the password and hashes it asynchronously so as not to block the server from processing other requests. The slower the hashing, the worse for attackers and the better for you. For example, this is how to get the password from the incoming request body and store the hash into the `users` array using 10 rounds of hashing, which is good enough:

```js
app.post('/auth/register', (req, res) => {
@@ -111,9 +115,9 @@ app.post('/auth/register', (req, res) => {

```

-Once the user record is created (which has the hash), we can login users to exchange the username and password for the JWT. They'll use this JWT for all other requests like a special key to authenticate and maybe unlock protected and restricted resources (that's authorization because not all users will have access to all the restricted resources).
+Once the user record is created (which has the hash), we can log in users to exchange the username and password for the JWT. They'll use this JWT for all other requests like a special key to authenticate and maybe unlock protected and restricted resources (that's *authorization* because not all users will have access to all the restricted resources).

-The GET is not a protected route but POST is a protected route because there's an extra `auth` middleware there which will check for the JWT:
+The GET route is not a protected route, but POST is a protected one, because there's an extra `auth` middleware there that will check for the JWT:

```js
app.get('/courses', (req, res) => {
@@ -125,7 +129,7 @@ app.post('/courses', auth, (req, res) => {
})
```

-The login route checks for the presence of this username in the `users` array but this can be a database call or a call to another API not a simple `find()` method. Next, `bcrypt` has a `compare()` method which asynchronously checks for the hash. If they match (`matched == true`), then `jwt.sign` will issue a signed (encrypted) token which has username in it but can have many other fields not just one field. The `SECRET` string will be populated from the environment variable or from a public key later when the app goes to production. It's a `const` string for now.
+The login route checks for the presence of this username in the `users` array but this can be a database call or a call to another API, not a simple `find()` method. Next, `bcrypt` has a `compare()` method that asynchronously compares the hash with the plain password. If they match (`matched == true`), then `jwt.sign()` will issue a signed (encrypted) token that has the username in it. (It can have many other fields, not just one field.) ```js app.post('/auth/login', (req, res) => { @@ -143,7 +147,13 @@ app.post('/auth/login', (req, res) => { }) ``` -When you get this JWT you can make requests to POST /courses. The `auth` which check for JWT uses the `jwt` module and the data from the headers. The name of the header doesn't matter that much because I set the name myself in the `auth` middleware. Some developers like to use `Authorization` but it's confusing to me since we are not authorizing, but authenticating. The authorization, which is who can do what, is happening in the Node middleware. Here we are performing authentication which is who is this. It's me. Azat Mardan. +JWT uses a special value `SECRET` to encrypt the data. Preferably when the app goes to production, an environment variable or a public key will populate the `SECRET` value. However now, `SECRET` is just a hard-coded `const` string. + +When you get this JWT, you can make requests to POST `/courses`. The `auth`, which checks for JWT, uses the `jwt` module and the data from the headers. I use the `auth` header name. The name of the header doesn't matter as long as you use the same name on the server and on the client. For the server, I set the header name in the `auth` middleware. + +Some developers like to use `Authorization`, but it's confusing to me since we're not authorizing, but authenticating. The authorization, which controls who can do what, is happening in the Node middleware. Here, we are performing authentication, which identifies who is this. + +My `auth` header will look like this `JWT TOKEN_VALUE`. Ergo, to extract the token value out of the header, I use a string function `split(' ')`: ```js const auth = (req, res, next) => { @@ -159,18 +169,17 @@ const auth = (req, res, next) => { } ``` -You can play with the full working and tested code in `code/ch6/jwt-example`. I like to use CURL but most of my Node workshop attendees like Postman so in Figure 6-2 I show how to use Postman to extract the JWT (on login). And on Figure 6-3 put it to action (on POST /courses) by pasting the token into the header `auth` after `JTW ` (JWT with a space). +You can play with the full working and tested code in `code/ch6/jwt-example`. I like to use CURL, but most of my Node workshop attendees like Postman (a cross-platform GUI app), so in Figure 6-2 I show how to use Postman to extract the JWT (on login). And Figure 6-3 uses the token on POST `/courses` by having the token in the header `auth` after JWT with a space (`JTW TOKEN_VALUE`). -This is how to test the JWT example step by step in Postman (or any other HTTP client): - -1. GET /courses will return a list of two courses which are in `server.js` -2. POST /courses with JSON data of `{"title": "blah blah blah"}` will return 401 Not Authorized. Now we know that this is a protected route and we need to create a new user to proceed -3. POST /auth/register with username and password will create a new user as shown in Figure 6-1. Next we can login (sign in) to the server to get the token -4. 
POST /auth/login with username and password which are matching existing records will return JWT as shown in Figure 6-2 -5. POST /courses with title and JWT in the header will allow and create a new course recored (status 201) as shown in Figure 6-3 and Figure 6-4 -6. GET /courses will show your new title. Verify it. No need for JWT for this request but it won't hurt either. Figure 6-5. -6. Celebrate and get a cup of tea with a (paleo) cookie. +We finished the implementation. Now test the JWT example with these step-by-step instructions in CURL, Postman or any other HTTP client: +1. GET `/courses` will return a list of two courses that are hard-coded in `server.js`. +2. POST `/courses` with JSON data `{"title": "blah blah blah"}` will return 401 Not Authorized. Now we know that this is a protected route, and we need to create a new user to proceed. +3. POST `/auth/register` with username and password will create a new user, as shown in Figure 6-1. Next we can log in to the server to get the token. +4. POST `/auth/login` with username and password that match the existing records will return JWT, as shown in Figure 6-2. +5. POST `/courses` with title and JWT from step 4 in the `auth` header will create a new course (response status 201), as shown in Figures 6-3 and 6-4. +6. GET `/courses` will show your new title. Verify it. No need for JWT for this request, but it won't hurt either. Figure 6-5. +7. Celebrate and get a cup of tea with a (paleo) cookie 🍪. ![Registering a new user by sending JSON payload](media/jwt-1.png) ***Figure 6-1.** Registering a new user by sending JSON payload* @@ -178,6 +187,8 @@ This is how to test the JWT example step by step in Postman (or any other HTTP c ![Logging in to get JWT](media/jwt-2.png) ***Figure 6-2.** Logging in to get JWT * +Don't forget to select `raw` and `application/json` when registering (POST `/auth/register`) and when making other POST requests. And now that you saw my password, please don't hack my accounts (). + ![Using JWT in the header auth](media/jwt-3.png) ***Figure 6-3.** Using JWT in the header auth* @@ -187,32 +198,36 @@ This is how to test the JWT example step by step in Postman (or any other HTTP c ![Verifying new course](media/jwt-5.png) ***Figure 6-5.** Verifying new course* -Finally, you can uncheck the `auth` header which has the JWT value and try to make another POST /courses request as shown in Figure 6-6. The request will fail miserably (401) as it should because there's no JWT this time (see `auth` middleware in `server.js`). +Finally, you can uncheck the `auth` header that has the JWT value and try to make another POST `/courses` request, as shown in Figure 6-6. The request will fail miserably (401), as it should because there's no JWT this time (see `auth` middleware in `server.js`). ![Unchecking auth header with JWT leads to 401 as expected](media/jwt-6.png) ***Figure 6-6.** Unchecking auth header with JWT leads to 401 as expected* -Don't forget to select `raw` and `application/json` when registering (POST /auth/register) and when making other POST requests. And now that you saw and know my password, please don't steal it. (It's not my actual password, but someone used dolphins as a password according to [this pull request "Remove my password from lists so hackers won't be able to hack me"](https://github.com/danielmiessler/SecLists/pull/155)). -JWT is easy to implement. 
Once on the client after the login request, you can store JWT in local storage or cookies (in the browser) so that your React, Vue, or Angular front-end app can send this token with each request. Protect your secret and pick a strong encryption algorithms to make it harder for attachers to hack your JWT data. +JWT is easy to implement. Developers don't need to create and maintain a shared database for the services. That's the main benefit. Clients get JWTs after the login request. + +Once on the client, client code stores JWT in browser or mobile local storage or cookies (also in the browser). React, Vue, Elm, or Angular front-end apps send this token with each request. If you plan to use JWT, it's important to protect your secret and to pick a strong encryption algorithm to make it harder for attackers to hack your JWT data. -For me sessions are somewhat more secure because I store my data on the server, on encrypted on the client. +If you ask me, sessions are more secure because with sessions I store my data *on the server* instead of on the client. Let's talk about sessions. # Session-Based Authentication Session-based authentication is done via the `session` object in the request object `req`. A web session in general is a secure way to store information about a client so that subsequent requests from that same client can be identified. -In the Express.js file, we'll need to import (`require()`) two modules to enable sessions. We need to include and use `cookie-parser` and `express-session`. +In the main Express.js file, we'll need to import (`require()`) two modules to enable sessions. We need to include and use `cookie-parser` and `express-session`: -1. `express.cookieParser()`: allows for parsing of the client/request cookies -2. `express.session()`: exposes the `res.session` object in each request handler, and stores data in the app memory or some other persistent store like MongoDB or Redis +1. `express.cookieParser()`: Allows for parsing of the client/request cookies +2. `express.session()`: Exposes the `res.session` object in each request handler, and stores data in the app memory or some other persistent store like MongoDB or Redis Note: in `express-session` version 1.5.0 and higher, there's no need to add the `cookie-parser` middleware. In fact, it might lead to some bad behavior. So it's recommended to use `express-sesison` by itself because it will parse and read cookie by itself. -Needless to say, `cookie-parser` and `express-session` must be installed via npm into the project's `node_modules` folder, i.e., you need to install them with `npm i cookie-parser express-session -SE`. +Needless to say, `cookie-parser` and `express-session` must be installed via npm into the project's `node_modules` folder. You can install them with: +``` +npm i cookie-parser express-session -SE +``` -Import with `require()` and apply to the Express app with `app.use()`: +In the main Express file such as `app.js` or `server.js`, import with `require()` and apply to the Express app with `app.use()`: ```js const cookieParser = require('cookie-parser') @@ -223,11 +238,12 @@ app.use(session()) ``` -The rest is straightforward. We can store any data in `req.session` and it appears automagically on each request from the same client (assuming their browser supports cookies). Hence, the authentication consists of a route that stores some flag (true/false) in the session and of an authorization function in which we check for that flag (if true, then proceed; otherwise, exit). 
For example, +The rest is straightforward. We can store any data in `req.session` and it appears automagically on each request from the same client (assuming their browser supports cookies). Hence, the authentication consists of a route that stores some flag (true/false) in the session and of an authorization function in which we check for that flag (if true, then proceed; otherwise, exit). For example to log in, we set the property `auth` on the `session` to `true`. The `req.session.auth` value will persist on future requests from the same client. ```js app.post('/login', (req, res, next) => { - if (checkForCredentials(req)) { // This function checks for credentials passed in the request's payload + if (checkForCredentials(req)) { + // checkForCredentials checks for credentials passed in the request's payload req.session.auth = true res.redirect('/dashboard') // Private resource } else { @@ -236,18 +252,20 @@ app.post('/login', (req, res, next) => { }) ``` -**Warning** Avoid storing any sensitive information in cookies. The best practice is not to store any info in cookies manually—except session ID, which Express.js middleware stores for us automatically—because cookies are not secure. Also, cookies have a size limitation (depending on the browser, with Internet Explore being the stringiest) that is very easy to reach. +**Warning** Avoid storing any sensitive information in cookies. The best practice is not to store any info in cookies manually—except session ID, which Express.js middleware stores for us automatically—because cookies are not secure. Also, cookies have a size limitation that is very easy to reach and which varies by browser with Internet Explore having the smallest limit. -By default, Express.js uses in-memory session storage. This means that every time an app is restarted or crashes, the sessions are wiped out. To make sessions persistent and available across multiple servers, we can use Redis for MongoDB for as session restore. +By default, Express.js uses in-memory session storage. This means that every time an app is restarted or crashes, the sessions are wiped out. To make sessions persistent and available across multiple servers, we can use a database such as Redis or MongoDB as a session store that will save the data on restarts and crashes of the servers. + +In fact, having Redis for the session store is one of the best practices that my team and I used at Storify and DocuSign. Redis provided one source of truth for the session data among multiple servers. Our Node apps were able to scale up well because they were stateless. We also used Redis for caching due to its efficiency. # Project: Adding E-mail and Password Login to Blog To enable session-based authentication in Blog, we need to do the following: 1. Import and add the session middleware to the configuration part of `app.js`. -2. Implement the authorization middleware `authorize` with a session-based authorization so we can re-use the same code for many routes -3. Add the middleware from #2 (step above) to protected pages and routes in `app.js` routes, e.g., `app.get('/api/, authorize, api.index)`. -4. Implement an authentication route POST `/login`, and a logout route, GET `/logout` in `user.js`. +2. Implement the authorization middleware `authorize` with a session-based authorization so we can reuse the same code for many routes. +3. Add the middleware from step 2 to protected pages and routes in `app.js` routes, e.g., `app.get('/api/, authorize, api.index)`. +4. 
-The rest is straightforward. We can store any data in `req.session` and it appears automagically on each request from the same client (assuming their browser supports cookies). Hence, the authentication consists of a route that stores some flag (true/false) in the session and of an authorization function in which we check for that flag (if true, then proceed; otherwise, exit). For example,
+The rest is straightforward. We can store any data in `req.session` and it appears automagically on each request from the same client (assuming their browser supports cookies). Hence, the authentication consists of a route that stores some flag (true/false) in the session and of an authorization function in which we check for that flag (if true, then proceed; otherwise, exit). For example, to log in, we set the `auth` property on the session to `true`. The `req.session.auth` value will persist on future requests from the same client.

```js
app.post('/login', (req, res, next) => {
-  if (checkForCredentials(req)) { // This function checks for credentials passed in the request's payload
+  if (checkForCredentials(req)) {
+    // checkForCredentials checks for credentials passed in the request's payload
    req.session.auth = true
    res.redirect('/dashboard') // Private resource
  } else {
@@ -236,18 +252,20 @@ app.post('/login', (req, res, next) => {
})
```

-**Warning** Avoid storing any sensitive information in cookies. The best practice is not to store any info in cookies manually—except session ID, which Express.js middleware stores for us automatically—because cookies are not secure. Also, cookies have a size limitation (depending on the browser, with Internet Explore being the stringiest) that is very easy to reach.
+**Warning** Avoid storing any sensitive information in cookies. The best practice is not to store any info in cookies manually—except the session ID, which the Express.js middleware stores for us automatically—because cookies are not secure. Also, cookies have a size limit that is very easy to reach and that varies by browser, with Internet Explorer having the smallest limit.

-By default, Express.js uses in-memory session storage. This means that every time an app is restarted or crashes, the sessions are wiped out. To make sessions persistent and available across multiple servers, we can use Redis for MongoDB for as session restore.
+By default, Express.js uses in-memory session storage. This means that every time an app is restarted or crashes, the sessions are wiped out. To make sessions persistent and available across multiple servers, we can use a database such as Redis or MongoDB as a session store that preserves the data across server restarts and crashes.
+
+In fact, using Redis as the session store is one of the best practices that my team and I used at Storify and DocuSign. Redis provided one source of truth for the session data among multiple servers. Our Node apps were able to scale up well because they were stateless. We also used Redis for caching due to its efficiency.
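As an aside (this is not part of the Blog project), one common way to get such a persistent store is the `connect-redis` package, which plugs into `express-session`. The sketch below assumes a connect-redis 3.x-style API; newer major versions changed the constructor, so check the package's README for the version you install:

```js
const express = require('express')
const session = require('express-session')
const RedisStore = require('connect-redis')(session) // connect-redis 3.x style

const app = express()

app.use(session({
  store: new RedisStore({
    host: process.env.REDIS_HOST || 'localhost',
    port: process.env.REDIS_PORT || 6379
  }),
  secret: process.env.SESSION_SECRET || 'replace-me',
  resave: false,
  saveUninitialized: false
}))

// The rest of the app stays the same: req.session now survives restarts
app.listen(3000)
```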
# Project: Adding E-mail and Password Login to Blog

To enable session-based authentication in Blog, we need to do the following:

1. Import and add the session middleware to the configuration part of `app.js`.
-2. Implement the authorization middleware `authorize` with a session-based authorization so we can re-use the same code for many routes
-3. Add the middleware from #2 (step above) to protected pages and routes in `app.js` routes, e.g., `app.get('/api/, authorize, api.index)`.
-4. Implement an authentication route POST `/login`, and a logout route, GET `/logout` in `user.js`.
+2. Implement the authorization middleware `authorize` with session-based authorization so we can reuse the same code for many routes.
+3. Add the middleware from step 2 to protected pages and routes in `app.js` routes, e.g., `app.get('/api/', authorize, api.index)`.
+4. Implement an authentication route, POST `/login`, and a logout route, GET `/logout`, in `user.js`.

We will start with the session middleware.

@@ -266,9 +284,9 @@ app.use(session({secret: '2C44774A-D649-4D44-9535-46E296EF984F'}))
```

**Warning** You should replace randomly generated values with your own ones.

-`session()` must be preceded by `cookieParser()` because session depend on cookies to work properly. For more information about these and other Express.js/Connect middleware, refer to Pro Express.js 4 [Apress, 2014].
+`session()` must be preceded by `cookieParser()` because session depends on cookies to work properly. For more information about these and other Express.js/Connect middleware, refer to *Pro Express.js 4* (Apress, 2014).

-Beware of another cookies middleware. It's name is `cookie-sesison`. It's not as secure as `cookie-parser` and `express-session`. `cookie-session` can be used in some cases but I don't recommend it because it stores all information in the cookie, not on the server. The usage is import and apply:
+Beware of another cookie middleware. Its name is `cookie-session`, and it's not as secure as `cookie-parser` with `express-session`. This is because `cookie-session` stores all information in the cookie, not on the server. `cookie-session` can be used in some cases, but I do not recommend it. The usage is to import the module and to apply it to the Express.js `app`:

```js
const cookieSession = require('cookie-session')
@@ -291,7 +309,7 @@ Let's add authorization to the Blog project.

## Authorization in Blog

-Authorization is also done via middleware, but we won’t set it up right away with `app.use()` like we did in the snippet for `res.locals`. Instead, we define a function that checks for `req.session.admin` to be true, and proceeds if it is. Otherwise, the 401 Not Authorized error is thrown and the response is ended.
+Authorization is also done via middleware, but we won’t set it up right away with `app.use()` like we did in the snippet for `res.locals`. Instead, we define a function that checks for `req.session.admin` to be true and proceeds if it is. Otherwise, the 401 Unauthorized error is thrown, and the response is ended.

```js
// Authorization
@@ -303,7 +321,7 @@ const authorize = (req, res, next) => {
}
```

-Now we can add this middleware to certain protected endpoints (another name for routes). Specifically, we will protect the endpoints to see the admin page (GET `/admin`), to create a new article (POST `/post`) and to see the create new article page (GET `/post`):
+Now we can add this middleware to certain protected endpoints (another name for routes). Specifically, we will protect the endpoints to see the admin page (GET `/admin`), to create a new article (POST `/post`), and to see the create-new-article page (GET `/post`):

```js
app.get('/admin', authorize, routes.article.admin)
@@ -311,7 +329,7 @@ app.get('/post', authorize, routes.article.post)
app.post('/post', authorize, routes.article.postArticle)
```

-We add the authorize middleware to API routes as well... to *all* of them using `app.all()`:
+We add the authorize middleware to API routes as well... to *all* of them, using `app.all()`:

```js
app.all('/api', authorize)
@@ -321,9 +339,9 @@ app.put('/api/articles/:id', routes.article.edit)
app.delete('/api/articles/:id', routes.article.del)
```

-The `app.all('/api', authorize)` is a more compact alternative to adding `authorize` to all `/api/...` routes manually. Less copy-paste and more code re-usage please.
+The `app.all('/api', authorize)` statement is a more compact alternative to adding `authorize` to all `/api/...` routes manually. Less copy-paste and more code reuse, please.
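One caveat worth verifying against the Express version you use: with route-style matching, the path `'/api'` on its own may match only `/api` itself and not nested paths such as `/api/articles/:id`. If nested routes slip past the check, a wildcard path or an `app.use()` mount is a safer way to cover everything under `/api` (a sketch, not the book's code):

```js
// Either variant runs authorize for every request whose path starts with /api
app.all('/api/*', authorize)
// or, mounted as middleware with prefix matching:
app.use('/api', authorize)
```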
-I know there are a lot of readers who like to see entire source code. Thus, the full source code of the `app.js` file after adding session support and authorization middleware is as follows (under the `ch6/blog-password` folder):
+I know a lot of readers like to see the entire source code. Thus, the full source code of the `app.js` file after adding session support and authorization middleware is as follows (under the `ch6/blog-password` folder):

```js
const express = require('express')
@@ -443,7 +461,7 @@ Now we can implement authentication (different from authorization).

The last step in session-based authorization is to allow users and clients to turn the `req.session.admin` switch on and off. We do this by having a login form and processing the POST request from that form.

-For authenticating users as admins we set the appropriate flag (`admin=true`), in the `routes.user.authenticate` in the `user.js` file. This is done in the POST `/login` route which we defined in the `app.js`—a line that has this statement:
+For authenticating users as admins, we set the appropriate flag (`admin=true`) in the `routes.user.authenticate` method in the `user.js` file. This is done in the POST `/login` route, which we defined in `app.js`—a line that has this statement:

```
app.post('/login', routes.user.authenticate)
@@ -457,7 +475,7 @@ exports.authenticate = (req, res, next) => {

The form on the login page submits data to this route. In general, a sanity check for the input values is always a good idea. If values are falsy (including empty values), we'll render the login page again with the message `error`.

-The `return` keyword ensures the rest of the code in this method isn’t executed. If the values non-empty (or otherwise truthy), then the request handler will not terminate yet and proceed to the next statements:
+The `return` keyword ensures the rest of the code in this method isn’t executed. If the values are non-empty (or otherwise truthy), then the request handler will not terminate yet and will proceed to the next statements:

```js
exports.authenticate = (req, res, next) => {
@@ -485,7 +503,7 @@ Thanks to the database middleware in `app.js`, we can access database collection
  if (!user) return res.render('login', {error: 'Incorrect email&password combination.'})
```

-If the program has made it thus far (avoided a lot of `return` statements previously), we can authenticate the user as administrator thus enabling the authentication and the `auth` (authorization) method:
+If the program has made it this far (past all of the earlier `return` statements), we can authenticate the user as an administrator, thus enabling the authentication and the `auth` (authorization) method:

```js
req.session.user = user
@@ -539,33 +557,39 @@ It's better to test the enhancements earlier. Everything should be ready for run

## Running the App

-Now everything should be set up properly to run Blog. Contrary to the example in Chapter 5, we see protected pages only when we’re logged in. These protected pages enable us to create new posts, and to publish and unpublish them. But as soon as we click "Logout" in the menu, we no longer can access the administrator page.
+Now everything should be set up properly to run Blog. In contrast to the example in Chapter 5, we see protected pages only when we’re logged in. These protected pages enable us to create new posts, and to publish and unpublish them. But as soon as we click Logout in the menu, we can no longer access the administrator page.
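The Logout link works because of the GET `/logout` route from step 4. The book's `user.js` has its own implementation; as a rough sketch of the idea, the handler only needs to drop the session data that `authorize` checks:

```js
// user.js (sketch only; the actual Blog implementation may differ)
exports.logout = (req, res, next) => {
  // Clearing the flags set at login makes authorize reject later requests
  req.session.user = null
  req.session.admin = false
  res.redirect('/')
}
```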
The executable code is under the `code/ch6/blog-password` folder of the `practicalnode` repository: https://github.com/azat-co/practicalnode.

-# Node.js OAuth
+# The `oauth` Module

-OAuth ([npm](https://www.npmjs.org/package/oauth) (), [GitHub](https://github.com/ciaranj/node-oauth) ()) is the powerhouse of OAuth 1.0/2.0 schemes for Node.js. It’s a module that generates signatures, encryptions, and HTTP headers, and makes requests.
+The `oauth` module is the powerhouse of OAuth 1.0/2.0 schemes and flows for Node.js. It’s a module that generates signatures, encryption, and HTTP headers, and makes requests. You can find it on npm at and on GitHub at .

-We still need to initiate the OAuth dances (i.e., requests back and forth between consumer, provider and our system), write the callback routes, and store information in sessions or databases. Refer to the service provider’s (e.g., Facebook, Twitter, Google) documentation for end points, methods, and parameter names.
+We still need to initiate the OAuth flows (i.e., requests back and forth between consumer, provider, and our system), write the callback routes, and store information in sessions or databases. Refer to the service provider’s (e.g., Facebook, Twitter, Google) documentation for endpoints, methods, and parameter names.

It is recommended that `oauth` be used when complex integration is needed or when only certain pieces of OAuth are needed (e.g., header signatures are generated by `oauth`, but the request is made by the `superagent` library).

-To add OAuth version 0.9.15 (the latest as of this writing) to your project, simply run:
+To add OAuth version 0.9.15 (the latest as of this writing) to your project, simply say the following incantation:

```
$ npm install oauth@0.9.15
```

+Once you install the `oauth` module, you can start implementing OAuth flows such as Twitter OAuth 2.0.
+
## Twitter OAuth 2.0 Example with Node.js OAuth

-OAuth 2.0 is less complicated and, some might argue, less secure than OAuth 1.0. The reasons for this are numerous and better understood when written by Eran Hammer, the person who participated in OAuth2.0 creation: OAuth 2.0 and the Road to Hell.
+OAuth 2.0 is less complicated and, some might argue, less secure than OAuth 1.0. You can find plenty of blog posts, flame wars, and rants on OAuth 1.0 vs. 2.0 online, if you wish. I'll give you my short version here.

-In essence, OAuth 2.0 is similar to the token-based authorization we examined earlier, for which we have a single token, called a *bearer*, that we pass along with each request. To get that token, all we need to do is exchange our app’s token and secret for the bearer.
+In essence, OAuth 2.0 doesn't prescribe encryption and instead relies on SSL (HTTPS) to protect the data in transit. OAuth 1.0, on the other hand, dictates its own signing of requests.

-Usually, this bearer can be stored for a longer time than OAuth 1.x tokens (depends on the rules set by a specific service-provider), and can be used as a single key/password to open protected resources. This bearer acts as our token in the token-based auth.
+The way OAuth 2.0 works is similar to the token-based authorization we examined earlier, for which we have a single token, called a *bearer*, that we pass along with each request. Think of the bearer as a special kind of password that unlocks all the treasures. To get that token, all we need to do is exchange our app’s token and secret for the bearer.

-Here’s an ordinary example from [Node.js OAuth](https://github.com/ciaranj/node-oauth#oauth20) (). (`node-auth`) docs. First, we create an `oauth2` object that has a Twitter consumer key and secret (replace the values with yours):
+Usually, this bearer can be stored for a longer time than OAuth 1.x tokens (depending on the rules set by a specific service provider) and can be used as a single key/password to open protected resources. This bearer acts as our token in the token-based auth.
+
+The following is an OAuth 2.0 request example, which I wrote for the `oauth` docs: . It illustrates how to make an OAuth 2.0 request to the Twitter API.
+
+First, we create an `oauth2` object that has a Twitter consumer key and secret (replace the values with yours):

```js
const OAuth = require('oauth')
@@ -595,27 +619,29 @@ oauth2.getOAuthAccessToken(
)
```

-Now we can store the bearer for future use and make requests to protected end points with it.
+Now we can store the bearer for future use and make requests to protected endpoints with it.
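How do we use the bearer? It typically travels in the `Authorization` HTTP header of each request. Here's a small sketch with Node's core `https` module (the search endpoint and query string are illustrative assumptions; check Twitter's documentation for the resources your app-only token can access):

```js
const https = require('https')

const bearer = 'AAAA...' // the access_token obtained above

const options = {
  hostname: 'api.twitter.com',
  path: '/1.1/search/tweets.json?q=nodejs',
  headers: {Authorization: `Bearer ${bearer}`}
}

https.get(options, (response) => {
  let body = ''
  response.on('data', (chunk) => { body += chunk })
  response.on('end', () => { console.log(body) })
})
```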
-**Note** Twitter uses OAuth2.0 for the so called app-only authorizations which are requests to protected resources. Those requests are made on behalf of the applications only (not on behalf of users by the apps). Twitter uses OAuth 1.0 for normal auths, i.e., requests made on behalf of the users by the app). Not all endpoints are available via app-only auth, and quotas/limitations are different. Please refer to the official documentation at .
+**Note** Twitter uses OAuth 2.0 for endpoints (resources) that don't require user permissions. These endpoints use what's called *app-only authorization*, because they are accessed on behalf of apps, not on behalf of the apps' users. Not all endpoints are available through app-only auth, and quotas/limitations are different. Conversely, Twitter uses OAuth 1.0 to authorize requests made on behalf of the users of the apps. To learn which endpoints use OAuth 2.0 and which use OAuth 1.0, please refer to the official documentation at .

## Everyauth

-The Everyauth module allows for multiple OAuth strategies to be implemented and added to any Express.js app in just a few lines of code. Everyauth comes with strategies for most of the service providers, so there’s no need to search and implement service provider-specific end points, parameters names, and so forth. Also, Everyauth stores user objects in a session, and database storage can be enabled in a `findOrCreate` callback using a promise pattern.
+The Everyauth module allows for multiple OAuth strategies to be implemented and added to any Express.js app in just a few lines of code. Everyauth comes with strategies for most of the service providers, so there’s no need to search for and implement service provider-specific endpoints, parameter names, and so forth. Also, Everyauth stores user objects in a session, and database storage can be enabled in a `findOrCreate` callback using a promise pattern.
+
+**Tip** Everyauth has an e-mail and password strategy that can be used instead of the custom-built auth. More information about it can be found in the Everyauth documentation at the [GitHub repository](https://github.com/bnoguchi/everyauth#password-authentication) ().

-**Tip** Everyauth has an e-mail and password strategy that can be used instead of the custom-built auth. More information about it can be found in Everyauth documentation at the [GitHub repository](https://github.com/bnoguchi/everyauth#password-authentication) ().

+Each one of the third-party services may be different. You could implement them all yourself, but Everyauth has submodules that implement exactly the OAuth flow each third-party service needs. You simply provide credentials to the submodules, configure them, and stop worrying about the details of the OAuth flow(s). That's right, you just plug in your app secret and client ID and boom! You are rolling.

-Everyauth has lots fo submodules which describe how a service might use OAuth exactly. Each one of them might be different. If you just use one of this submodule then you don't need to worry about the details. Instead you just plug in your app secret and client ID and boom! You are rolling. In other words, submodules enable service provider-specific authorization strategies and there are tons of these submodules (strategies): password (simple email and password), Facebook, Twitter, Google, Google Hybrid, LinkedIn, Dropbox, Tumblr, Evernote, GitHub, Instagram, Foursquare, Yahoo!, Justin.tv, Vimeo, 37signals (Basecamp, Highrise, Backpack, Campfire), Readability, AngelList, Dwolla, OpenStreetMap, VKontakte (Russian social network famous for its pirated media), Mail.ru (Russian social network), Skyrock, Gowalla, TripIt, 500px, SoundCloud, mixi, Mailchimp, Mendeley, Stripe, Datahero, Salesforce, Box.net, OpenId, and event LDAP and Windows Azure Access Control Service! and more at .
+Everyauth submodules are specific implementations of authorizations. And boy, open source contributors wrote tons of these submodules (strategies), so developers don't have to reinvent the wheel: password (simple email and password), Facebook, Twitter, Google, LinkedIn, Dropbox, Tumblr, Evernote, GitHub, Instagram, Foursquare, Yahoo!, Justin.tv, Vimeo, Basecamp, AngelList, Dwolla, OpenStreetMap, VKontakte (Russian social network famous for its pirated media), Mail.ru, SoundCloud, MailChimp, Stripe, Salesforce, Box.net, OpenID, LDAP, and Windows Azure Access Control Service, and the list goes on and on at .

# Project: Adding Twitter OAuth 1.0 Sign-in to Blog with Everyauth

A typical OAuth 1.0 flow consists of these three steps (simplified):

1. Users go to a page/route to initiate the OAuth dance. There, our app requests a token via GET/POST requests using the signed app’s consumer key and secret. For example, `/auth/twitter` is added automatically by Everyauth.
-2. The app uses the token extracted in step 1 and redirects users to the service-provider (Twitter) and waits for the callback.
-3. The service provider redirects users back to the app which catches the redirect in the callback route (e.g., `/auth/twitter/callback`). Then, the app extracts the access token, the access token secret, and the user information from the Twitter incoming request body / payload.
+2. The app uses the token extracted in step 1, redirects users to the service provider (Twitter), and waits for the callback.
+3. The service provider redirects users back to the app, which catches the redirect in the callback route (e.g., `/auth/twitter/callback`). Then, the app extracts the access token, the access token secret, and the user information from the incoming Twitter request body/payload.

-However, because we’re using Everyauth, we don’t need to implement requests for the initiate and the callback routes!
+However, because we’re using Everyauth, we don’t need to implement requests for the initiation and the callback routes!

Let’s add a Sign in with Twitter button to our project. We need the button itself (image or a link), app key, and secret (obtainable at dev.twitter.com), and then we must augment our authorization route to allow for specific Twitter handlers to be administered on Blog.

@@ -667,7 +693,7 @@ const TWITTER_CONSUMER_KEY = process.env.TWITTER_CONSUMER_KEY
const TWITTER_CONSUMER_SECRET = process.env.TWITTER_CONSUMER_SECRET
```

-To pass these variables we can use Makefile. In the Makefile, add these lines, substituting ABC and XYZ with your values:
+To pass these variables, we can use a Makefile. In the Makefile, add these lines, substituting ABC and XYZ with your values:

```
start:
@@ -682,7 +708,7 @@ Also, add the `start` command to `.PHONY`:
.PHONY: test db start
```

-As another option, we can create a Bash file `start.sh`:
+As another option, we can create a bash file `start.sh`:

```
TWITTER_CONSUMER_KEY=ABCABC \
@@ -710,7 +736,7 @@ everyauth.twitter
  .consumerSecret(TWITTER_CONSUMER_SECRET)
```

-Then, to tell the module what to do when Twitter sends back the authorized user object `twitterUserMetadata`, type
+Then, to tell the module what to do when Twitter sends back the authorized user object `twitterUserMetadata`, type this chained method with four arguments:

```js
.findOrCreateUser((session,
@@ -743,7 +769,7 @@ Store the `user` object in the in-memory session, just like we did in the `/logi
  session.user = twitterUserMetadata
```

-The most important, set admin flag to `true`:
+Most importantly, set the admin flag to `true`:

```js
session.admin = true
@@ -766,7 +792,7 @@ After all the steps are done, instruct Everyauth where to redirect the user:
  .redirectPath('/admin')
```

-Everyauth is so smart that it automatically adds a `/logout` route, this means our route (`app.get('/logout', routes.user.logout);`) won't be used. So we need to add some extra logic to the default Everyauth strategy. Otherwise, the session will always keep admin = true. In the `handleLogout` step, we clear our session by calling the exact same method from `user.js`:
+Everyauth is so smart that it automatically adds a `/logout` route, which means our route (`app.get('/logout', routes.user.logout);`) won't be used. So we need to add some extra logic to the default Everyauth strategy. Otherwise, the session will always keep `admin = true`. In the `handleLogout` step, we clear our session by calling the exact same method from `user.js`:

```js
everyauth.everymodule.handleLogout(routes.user.logout)
@@ -780,29 +806,29 @@ everyauth.everymodule.findUserById( (user, callback) => {
})
```

-Last but not least, the line that follows, enable Everyauth routes and it must go after cookie and session middleware but must come before normal routes (e.g., `app.get(), app.post()`):
+Last but not least, the line that follows enables Everyauth routes, and it must go after the cookie and session middleware but before the normal routes (e.g., `app.get()`, `app.post()`):

```js
app.use(everyauth.middleware())
```

-The full source code of the `code/ch6/blog-everyauth/app.js` file after adding the Everyauth Twitter OAuth1.0 strategy is rather lengthy thus is not listed here but can be found on GitHub.
+The full source code of the `code/ch6/blog-everyauth/app.js` file after adding the Everyauth Twitter OAuth 1.0 strategy is rather lengthy, so I won't print it here, but you can find it on GitHub in the book's repository.
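Before running the app, you may want verbose logging so you can watch each step of the OAuth dance in the terminal. Everyauth exposes a debug flag for this; treat the exact property as an assumption and confirm it in the Everyauth README for your version:

```js
const everyauth = require('everyauth')

everyauth.debug = true // print each Everyauth step (tokens, redirects, callbacks) to the terminal
```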
-To run the app, execute `$ make start`, and **don’t forget to replace** the Twitter username, consumer key, and secret with yours. Then when you click on "Sign in with Twitter", you'll be redirected to Twitter to authorize this application. Then you'll be redirected back to the localhost app and should see the admin page menu. We have been authorized by a third-party service provider!
+To run the app, execute `$ make start`, and **don’t forget to replace** the Twitter username, consumer key, and secret with yours. Then, when you click the "Sign in with Twitter" button, you'll be redirected to Twitter to authorize this application. After that, you'll be redirected back to the localhost app and should see the admin page menu. We have been authorized by a third-party service provider!

-Also, the user information is available to your app so it can be stored in the database for future usage. If you already gave permissions, the redirect to and from Twitter might happen very fast. The terminal output is shown in Figure 6-1 shows each step of Everyauth process such as getting tokens and sending responses. Each step can be customized to your app's needs.
+Also, the user information is available to your app so it can be stored in the database for future use. If you already gave permissions, the redirect to and from Twitter might happen very fast. I captured the terminal output in Figure 6-7. The logs show each step of the Everyauth process, such as getting tokens and sending responses. You can customize each step.

![alt](media/image1.png)

-***Figure 6-1.** Everyauth Twitter strategy with debug mode in action*
+***Figure 6-7.** Everyauth Twitter strategy with debug mode in action*

Auths are important. Good job.

# Summary

-In this chapter, we learned how to implement a standard e-mail and password authentication, and used Express.js middleware to protect sensitive pages and end points in Blog. Then, we covered OAuth 1.0 and OAuth 2.0 with Everyauth and OAuth modules, respectively.
+In this chapter, we learned how to implement standard e-mail and password authentication, and we used Express.js middleware to protect sensitive pages and endpoints in Blog. Then, we covered OAuth 1.0 and OAuth 2.0 with the Everyauth and `oauth` modules, respectively.

-Now we have a few security options for Blog. In the next chapter we'll explore Mongoose () object-relational mapping object-document mapping (ODM) Node.js library for MongoDB.
+Now we have a few security options for Blog. In the next chapter, we'll explore Mongoose (), the object-document mapping (ODM) Node.js library for MongoDB.

-The Mongoose library is a good choice for complex systems with a lot of interdependent business logic between entities, because it completely abstracts the database and provides developers with tools to operate with data only via Mongoose objects. The chapter will touch on the main Mongoose classes and methods, explain some of the more advanced concepts, and re-factor persistence in Blog.
+The Mongoose library is a good choice for complex systems with a lot of interdependent business logic between entities, because it completely abstracts the database and provides developers with tools to operate with data only via Mongoose objects. The chapter will touch on the main Mongoose classes and methods, explain some of the more advanced concepts, and refactor persistence in Blog.
diff --git a/chapter9/chapter9.md b/chapter9/chapter9.md
index f9ae2e84..05e32664 100755
--- a/chapter9/chapter9.md
+++ b/chapter9/chapter9.md
@@ -28,7 +28,7 @@ By maintaining a duplex open connection between the client and the server, updat

There's no need to use any special libraries to use WebSocket in modern browsers. The following StackOverflow question has a list of such browsers: [What browsers support HTML5 WebSockets API?](http://stackoverflow.com/questions/1253683/what-browsers-support-html5-websocket-api) () For older browser support, the workaround includes falling back on polling.

-As a side note, polling (both short and long), can also be used to emulate the real-time responsiveness of web apps. In fact, some advanced libraries (Socket.IO) fall back to polling when WebSocket becomes unavailable as a result of connection issues or users not having the latest versions of browsers. Polling is relatively easy and we don't cover it here. It can be implemented with just a `setInterval()` callback and an end point on the server. However, there's not real-time communication with polling; each request is separate.
+As a side note, polling (both short and long) can also be used to emulate the real-time responsiveness of web apps. In fact, some advanced libraries (Socket.IO) fall back to polling when WebSocket becomes unavailable as a result of connection issues or users not having the latest browser versions. Polling is relatively easy, and we don't cover it here. It can be implemented with just a `setInterval()` callback and an endpoint on the server. However, there's no real-time communication with polling; each request is separate.

# Native WebSocket and Node.js with the ws Module Example