notes and code from finished 'API Design in Node.js' course 📚


Overview creating a server with just node can be taxing and difficult...express allows you to do this stuff easier.

express sits on top of node (on top of the http module) to make servers not so hard. theres stuff like strongloop that sits on top of express. express is a good place to start because you dont have to worry about url parsing and small things, but you still have to do some grunt work, so it isnt magic out the box like rails.

express is a framework that has routing and middleware. point blank.

node is single threaded...everything is async and uses callbacks, and express takes advantage of that.

express allows us to register callbacks for when a particular combination of http verb and route is hit. its a routing library with middleware

^ thats all it does!!!!!!!

express uses middleware to modify and inspect the incoming request. ex: do something and pass it to the next middleware in the stack, or stop and dont move forward.

middleware is prob the most powerful feature in express aside from routing, which again is all express is.

you can think of middleware as the plugins for express..big community there, from parsing URLs, handling authentication, etc.

express can also be a static web server.

getting RESTful REST is a pattern...not baked into w3c or anything. modern web is based around REST.
  • should be stateless = aka when I make a req to a server, it shouldnt pass around info from prev. request
  • should use http verbs explicitly
  • expose a directory like URL pattern for our routes...
  • used to transfer json and xml
Anatomy of a REST API - first model your data, what will the json object look like? (instructor makes a 'lion' object) resources = data
  • access the resources through routes using http verbs
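putting those two ideas together — resources plus explicit http verbs — the route map for the course's 'lion' resource might look like this (a sketch; handlers are omitted and the exact paths are my assumption based on the directory-like pattern above):

```javascript
// sketch of the verb + URL combinations for a 'lion' resource
// (paths follow the directory-like pattern above; handlers omitted)
var routes = [
  { method: 'GET',    path: '/lions'     }, // read all lions
  { method: 'GET',    path: '/lions/:id' }, // read one lion
  { method: 'POST',   path: '/lions'     }, // create a lion
  { method: 'PUT',    path: '/lions/:id' }, // update a lion
  { method: 'DELETE', path: '/lions/:id' }  // delete a lion
];
console.log(routes.length); // 5
```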

note: think I should revisit the URL structure of my APIs in my workspace project

exercise 2 app.use(express.static('static')); //this will serve index.html on the root directory of /client

if you use app.use() it allows you to pass in a middleware function and that function will be executed in the order it was declared (learned from previous course).

bodyParser middleware allows us to post json to the server; by default express doesnt know how to parse it. this attaches the parsed content to req.body so we can get the objects that we put/post to the server

app.use() is GLOBAL middleware

NOTE: its important to send back the resources you've created on post (look into your code and think about this!!!)

also theres a difference b/w designing an API for an App and an API as a service...

what is middleware express sits on top of the http node module. the http module fires off events; express listens to those events and registers callbacks that fire for specific routes. for each route registered, it'll keep a stack of middleware (which is just an array of functions) that it will call in order.

middleware = just a function with access to the req, res objects and the next() fn, which pushes to the next fn in the middleware stack.

its just a stack of functions that dont go to the next one until we tell it to.
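a minimal plain-JS sketch of that stack idea (not express's real implementation, just the mechanics of next()):

```javascript
// a tiny middleware runner: each fn gets (req, res, next) and the
// chain only advances when next() is called
function run(stack, req, res) {
  var i = 0;
  function next() {
    var fn = stack[i++];
    if (fn) fn(req, res, next);
  }
  next();
}

var order = [];
run([
  function(req, res, next) { order.push('logger'); next(); },
  function(req, res, next) { order.push('auth'); next(); },
  function(req, res, next) { order.push('handler'); } // no next(): chain stops here
], {}, {});

console.log(order); // [ 'logger', 'auth', 'handler' ]
```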

wow, nice and simple...

if a middleware doesnt call next() or send a response, the request will just hang and eventually time out. < perhaps this is an issue with your existing app???

Types of middleware
there are 5 diff types of middleware:
  • 3rd party (like bodyParser)
  • router level middleware
  • application level middleware (like app.use())
  • error handling middleware: a little different than regular middleware in that it takes an error as a fn argument: function(err, req, res, next) {}
  • built-in, like express.static()...not many, but theyre on the express object

interesting: app.get() fns are siblings, not descendents...obvious, but good to point out.

all middleware is just a fn

middleware examples

app.use(morgan()); <-- this is app level

app.get('/todos', checkAuth(), function(req, res) { <-- `checkAuth()` is route level middleware


we'll use 3rd party and built-in middleware a lot, but we'll also be creating our own middleware: EX:

app.use(function(req, res, next) {
  // (the exact value being checked was elided in my notes; some request property)
  if (req.query.secret === 'catnip') {
    next();
  } else {
    res.status(401).send({message: 'nope'});
  }
});
now in our exercise we'll create a lion middleware that checks if the 'ID' param is given, and if so, the middleware will find the specific lion using its ID and pass over to the route hit:


app.param('id', function(req, res, next, id) {
  // find the lion based off the id
  var lion = _.find(lions, {id: id});
  // and attach it to req.lion. Remember to call next()
  req.lion = lion;
  next();
});

note: app.param is checking for 'id'. ^ OOOH you could use this in your own project~~~~!!!!


router = server inside of a server?

It uses its parent's middleware (app.use()), but also its own middleware.

think of a router as a module with its own middleware stack and functionality.

great for more fine-grained control over resources and versioning our APIs.

so like 'app.use()' but on a local rather than global level.

a good pattern is to define a router via express.Router();


var express = require('express');
var app = express();
var todosRouter = express.Router();

todosRouter.get('/', function(req, res) {
  // '/' here is the root of todosRouter, which is mounted at '/todos',
  // so this handler fires on hitting '/todos', not on hitting '/'
});

app.use('/todos', todosRouter);
//^ this is called mounting...whenever you hit this url, use this router..

the router can do todosRouter.use, todosRouter.param, etc. the same way we would use app.use, app.get, etc

in our exercise, we create an additional router for 'tigers' using the existing router of 'lions' as an example (look at branch step-4). This shows how we can create additional router modules, and as many as we need to fit our needs.

this is cool because instead of writing a GET request in server.js like /tigers/:id, we have a different router used whenever /tigers/* gets hit. EX: (in server.js)

app.use('/lions', lionRouter);
app.use('/tigers', tigerRouter);

in my previous app, I have just used one router like:

app.use('/', router);

so this is interesting to see...SUBROUTERS!!!

also INTERESTING: if you want something to be available globally, you can attach it to the global object like:

global.config = {};

so now in any file you can call config and add to it....this could be USEFUL!! especially for my current project (workspace).

global object is like the window object of node...but TYPICALLY YOU DONT WANT TO USE IT..DONT POLLUTE GLOBAL SCOPE.

so basically what we've done is create appropriate sub routers for 'lions' and 'tigers'.

Error handling

we can actually clean up our routes a bit using routerName.route('/path')..lets take a look at how the root path '/' is handled in both GET and POST methods:

tigerRouter.get('/', function(req, res){
  // ...
});'/', updateId, function(req, res) {
  var tiger = req.body;
  // ...
});


we can turn this into:

  .get('/', function(req, res){
    // ...
  })
  .post('/', updateId, function(req, res) {
    var tiger = req.body;
    // ...
  });

^ totally use this in the new project!!!!

why make subrouters? it allows us to abstract a different piece of functionality...different subrouters could be used in different concerns, and/or different people on teams...

subrouters are kinda like applications within applications, and a host of good things can occur with that sort of functionality.

Testing in node unit tests are similar to how you would test in the browser minus the DOM. ^ aka 'this function does this, and this function does that'

integration testing is where we start testing the actual API and what it does when we hit the routes. ^ aka 'when i hit this route, what does the db respond with?'

we'll do some integration testing to see what our API responds to when we throw diff. params at it..

NOTE: a good practice with node and express is to export the app before starting it...this means that instead of calling app.listen(3000) directly, we export the app so our tests can use the app and all the routes on it...this way we can have the tests start the server for us.

^ seems kinda weird. but the idea is export the whole app so our tests can use it. that sounds good.

strategies for testing our API:

  • postman, httpie, etc....we were just doing this in the last video, the difference is that now instead of us typing in those GET/POST routes and seeing what comes back, the tests will do that for us.

-we can use jasmine or mocha, plus an assertion library if we dont have one (jasmine has one built in), and something like Supertest to test our api...

supertest is built on top of the superagent framework...superagent is like ajax for your API.

we'll use mocha, chai, and supertest to do integration testing on the code we have written.

lets look at an example integration test:

var app = require('./app');
var request = require('supertest');

describe('todos', function() {

  it('should GET all todos', function(done) {
    request(app)
      .get('/todos')
      .set('Accept', 'application/json')
      .expect('Content-Type', /json/)
      .end(function(err, resp) {
        done();
      });
  });
});

note: expect() is an assertion

mocha is just like jasmine except it doesnt come with an assertion library or a mocking library, etc.

if you want the full suite, use jasmine, if you want to use your own stuff, use mocha.

Node Environment Variables + Exercise 5 `process` is a massive object with everything about the app.

process.env is short for environment and is the object holding all our environment variables. this is where we'll tell node,"hey, youre in dev, you're in production, etc"

export HEY='hey'; node index.js

'export' creates an env variable, and then we run another command.

process.env is global and a useful prop is NODE_ENV like process.env.NODE_ENV

in node, if you have a file called 'index.js' in a directory and then you require that directory, it automatically includes that file as the root.

so instead of require('./server/server/index') you can call it like require('./server/server')

also Chai is an assertion library

supertest handles http stuff, mocha actually runs the test, giving us describe, it, and others.

take a look at this finished testing spec.js file:

var app = require('./server');
var request = require('supertest');
var expect = require('chai').expect;

describe('[LIONS]', function(){

  it('should get all lions', function(done) {
    request(app)
      .get('/lions')
      .set('Accept', 'application/json')
      .expect('Content-Type', /json/)
      .end(function(err, resp) {
        expect(resp.body).to.be.an('array');
        done();
      });
  });

  it('should create a lion', function(done) {
    request(app)
      .post('/lions')
      .send({
        name: 'Mufasa',
        age: 100,
        pride: 'Evil lions'
      })
      .set('Accept', 'application/json')
      .expect('Content-Type', /json/)
      .end(function(err, resp) {
        expect('Mufasa');
        done();
      });
  });

  it('should delete a lion', function(done) {
    // create a lion first, then delete it with the id that came back
    request(app)
      .post('/lions')
      .send({
        name: 'test lion',
        age: 100,
        pride: 'test lion'
      })
      .set('Accept', 'application/json')
      .end(function(err, resp) {
        var lion = resp.body;
        request(app)
          .delete('/lions/' +
          .end(function(err, resp) {
            done();
          });
      });
  });

  it('should update a lion', function(done) {
    // create a lion first, then update it with the id that came back
    request(app)
      .post('/lions')
      .send({
        name: 'test lion',
        age: 100,
        pride: 'test lion'
      })
      .set('Accept', 'application/json')
      .end(function(err, resp) {
        var lion = resp.body;
        request(app)
          .put('/lions/' +
          .send({
            name: 'new name'
          })
          .end(function(err, resp) {
            expect('new name');
            done();
          });
      });
  });
});
  • notice how we're using .send() to create a new lion, and using .end() with the resp argument. this resp argument holds the response that came back, so our assertions use resp.body accordingly.

^ this was a bit tricky with creating new lion objects, deleting and updating them via resp.body and such, so its worth a couple look-overs for sure.

Application Organization our api will consist of many components to support the API: auth, static serving, etc...not just the rest endpoints.

what makes up the API really?? an API is really just a collection of resources (lions, tigers), with models to define how those resources look, controllers to access those resources, and routes to let the controller know how to run and to expose our api.

route -> controller --> model (client side) (back end)

a model is just a blueprint of what our resource is going to look like....good way of explaining it.

MVC is a classic organization pattern, but we'll take a service-oriented approach to organizing our code, grouping by functionality rather than type.

so not "all controllers go here, all models go here" but "everything about the tiger goes here, everything about the lion goes here."


we can use process.env.NODE_ENV variable to tell our api what environment its running in, being either testing, development, or production or whatever.

depending on the environment, we can change things in our app like DB urls (like say going from dev to prod) or toggle console.logging. we can also set and reference other env variables, so instead of searching everywhere for those values to change, we'll have a central location for that config.

depending on the env, we can require another config file and merge the two together so our app can use it.

why not MVC? -service oriented favors react-style components and node is actually (supposedly) pushing towards a service-oriented approach.

think of the config object as the 'base config'; depending on the env, we'll load in a diff. config object that will override it. theres gonna be config.js for starters, but also testing.js, development.js, production.js, etc.

_.assign() --> putting objects onto other objects via lodash.

module.exports = _.assign(config, envConfig || {});
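a quick sketch of what that merge does, using plain Object.assign (which behaves like _.assign here) and made-up values:

```javascript
// base config plus an env-specific override, merged left to right:
// later objects win on conflicting keys
var config = { env: 'development', port: 3000, logging: true };
var envConfig = { env: 'production', logging: false }; // e.g. loaded from production.js

var merged = Object.assign({}, config, envConfig || {});
console.log(merged); // { env: 'production', port: 3000, logging: false }
```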

example of config usefulness:

logger.log('listening on http://localhost:' + config.port);

Example Six

look at server.js:


^ this is actually just a fn that attaches a bunch of middleware to whatever app you give it...its the same as putting in:

module.exports = function(app) {
  app.use(bodyParser.urlencoded({ extended: true }));
  // ...more middleware...
};

but we've included it in another file because usually, you'll have more than 3 pieces of middleware so you'll want to put it in its own file.

once again, this is only saying,"give me an app, and I'll put this middleware on it".


var api = require('./api/api');

app.use('/api', api);

^ just use api.js on hitting /api.


// default config object for our api
var config = {
  /* just placing the names of our possible NODE_ENV values for later */
  dev: 'development',
  test: 'testing',
  prod: 'production',
  port: process.env.PORT || 3000
};

// check to see if NODE_ENV was set; if not, set it to dev
process.env.NODE_ENV = process.env.NODE_ENV ||;

^ process.env.NODE_ENV so if we define something in heroku or AWS it'll take that NODE_ENV, else it will do 'dev'.


its interesting how you can turn stuff on/off depending on the environment variables...this allows us to greatly reduce any issues...I need to think like this much, much more.

Mongo Introduction

mongo = a NoSQL document store (a non-relational database). its a basket we can throw stuff in, basically. we dont have to model our data; we can just throw json in it and ask for it later. its simple to set up and use, maybe not best for all cases, but for this course its great.

note: had to troubleshoot issue of using mongodb via command line, was getting shutdown (100) errors... heres the fix:

next, we can interact with mongo via the mongo shell: open a new terminal window and type mongo... now we can run commands in the shell, EX: show dbs

we'll create a collection...a collection is like a table, a collection of people, dogs, todos...its a group of data that is same-ish.

lets make a DB via mongo shell: use puppies : creates a DB called puppies and switches to it

now we'll create a collection via mongo shell: db.createCollection('toys')

now we'll insert a document into the collection...a document is an instance of a collection, its the resources we'll be using:{name: 'yoyos', color: 'red'})

now lets query the collection to see what we got in there:

other things to know: by saying use along with the name of the DB you want, mongo will switch to it. if it doesnt exist, it'll create it, then switch to it.

take a look at the actual terminal code:

> use puppies
switched to db puppies
> show collections
> show dbs
admin  0.000GB
local  0.000GB
> db.createCollection('toys')
{ "ok" : 1 }
> show collections
>{name: 'yoyos', color: 'red'})
WriteResult({ "nInserted" : 1 })
{ "_id" : ObjectId("5a0b1990e6a3255672af695c"), "name" : "yoyos", "color" : "red" }
Using Mongo With Node

we'll use mongoose, a library allowing us to talk to the DB easily.

mongoose will abstract things away, add support for things like promises, and let us model our data with schemas...mongoDB itself is schemaless, and mongoose lets us add schemas on top of it like we typically would in a relational database.

It also allows us to establish relationships with our models as well. Mongoose does this, mongo does not. remember!

using mongoose:

var mongoose = require('mongoose');

// connecting to a database with mongoose is EZ
// use the mongodb protocol and whatever name you want
// if it does not find that database it will create it.
mongoose.connect('mongodb://localhost/whatever-name-you-want');

note: take a look at this line:

var Todo = mongoose.model('todo', TodoSchema);

'todo' is the name we give the model (mongoose derives the collection name from it), and TodoSchema is the schema it will be matched against.

also interesting, mongoose will make all collection names lowercase and plural...hmmm....


we can use schemas in mongoose to determine how our data will look, adding structure and validations to our data. REMEMBER, mongo is schema-less, not mongoose.

we also need some sort of relationships with our data...users creating posts and posts having categories, and belonging to users...

users can create posts = 1 to many relationship
category can belong to many posts = many to many relationship

you already know a good bit about schemas via the wes bos course, but perhaps you could refactor your schema to be simpler given what you see in this example:

var mongoose = require('mongoose');

// first make a new schema for our model
// this is just the blueprint and how we tell mongoose to
// handle our data. Mongo doesn't care.
var TodoSchema = new mongoose.Schema({
  // we define a property for this model object
  // then what type it is, in this case, the
  // completed property will have to be a Boolean
  completed: Boolean,

  // we can add validations too
  // just use an object literal here instead
  // just be sure to have a type property on that object
  // to tell mongoose what type this property will be
  content: {
    type: String, // will be a string
    required: true // will not create a todo without this content property
  }
});

// no matter what we pass in as a name for the model,
// mongoose will lowercase it and pluralize it for the collection.
// so below the name for the model is 'Todo', mongoose will
// convert that to 'todos' in the database.
// TodoModel is the model we'll use in node to CRUD so
// it makes sense to export this
var TodoModel = mongoose.model('Todo', TodoSchema);
module.exports = TodoModel;
Schema Types

other types we can use on the schema:

  • unique: true ...the value cant be the same can DEFINITELY use this.
  • Date
  • Buffer = a buffer is a type...look into this
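a first look into it — a Buffer is node's type for raw binary data:

```javascript
// a Buffer holds raw bytes; strings get encoded (utf8 by default)
var buf = Buffer.from('hello');
console.log(buf.length);      // 5 bytes
console.log(buf[0]);          // 104, the byte for 'h'
console.log(buf.toString());  // back to 'hello'
```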

if we want to associate 2 models, we can just associate the ID...mongoose.Schema.Types.ObjectId

look at this property on our schema:

  // we have a relationship here. A dog belongs to an owner.
  // we can store the owners id here on the owner field. ObjectId are
  // ids in mongoose. the ref key is telling mongoose what model that
  // id belongs too. Helpful for when we ask the dog who its owner is
  // we have a relationship here. A dog belongs to an owner.
  // we can store the owners id here on the owner field. ObjectId are
  // ids in mongoose. the ref key is telling mongoose what model that
  // id belongs to. Helpful for when we ask the dog who its owner is
  owner: {
    type: mongoose.Schema.Types.ObjectId,
    ref: 'owner',
    required: true
  }
The ID is referencing some owner somewhere...the reason we're creating this is to use mongo populations. populations are like creating a join at call time...there is no 'join table', but when we want to see who the owner is, we can pull in the referenced 'owner' doc...we'll talk about this more in a bit.

to recap:

model - the JS representation used in the API to access the document in the db
document - the instance of the model in the database thats part of the collection
collection - a group of documents
schema - the actual data blueprint "this is how I want my data to look"

objectid...actually an object but looks like a string....this is _id in mongo...YOU KNOW THIS!!!

  • NOTE: objectIDs are actually indexed by default, so its easier to find something by its ID, rather than "find me a puppy with long hair"
Blog Schema representation

interesting: if you refactor your schema and you already have data from your previous schema, you dont have to migrate that old data. thats nice!

Exercise 8 note: the keyword 'type' within an object denotes it as a value rather than as an object: ex:
  username: {
    type: String,
    unique: true,
    required: true
  }

  address: {
    state: String
  }

^ second one is an object, the first is a value, and this is denoted by 'type: String'

var PostSchema = new Schema({
  title: {
    type: String,
    required: true,
    unique: true
  },
  text: {
    type: String,
    required: true
  },
  author: {
    type: Schema.Types.ObjectId,
    ref: 'user',
    required: true
  },
  categories: [{
    type: Schema.Types.ObjectId,
    ref: 'category'
  }]
});

^ interesting is that 'ref' is a reference to a model name, which mongoose maps to a collection.

right now, the posts schema knows about the user/author who wrote it, but the user doesnt know about the posts it created. so how would you find out all the posts of a given user? ^ you would query the posts to find the posts of a given user...go to /posts, then grab all the posts from a specific user. so we dont necessarily need the userSchema to know this at all. hmmmm.

if we did this in our user schema:

posts: [
  {ref: 'posts', type: Schema.Types.ObjectId}
]

Now, to populate all the posts for one user this way, we'd have to keep that whole array up to date and go through every post to find out which ones are the user's. if instead we put the reference on the posts schema, we're only querying one collection for one author, not tracking many things.

so the idea is here we have a 'one to many' relationship and when we have that, we want the reference to be in the 'many' section, in order to do a faster, better query.

Querying Data with Mongoose

mongo has a ton of query options...too many to list. we have a few ways to ask mongo for the data we want.

Model.find({}, function(err, documents) {
  if (err) {
    // handle the error
  } else {
    // do something with the documents
  }
});


Post.find({title: 'whatever'}, function(err, docs) {
  //callback func here
});

^ find the post titled 'whatever' then do something on callback.

  • find() method always returns an array.

using just Model.find() without something inside returns everything...


another useful method: findById():

Model.findById('57490284328430', function(err, doc) {
  if (err) {
    // handle the error
  } else {
    // doc is the document, or null if nothing matched
  }
});

^ we'll either return the document, an error, or nothing.


var dog = new Dog({
  name: 'Bingo'
});, savedDog) {
  if (err) {
    next(err);
  } else {
    // savedDog is the saved document
  }
});

 takes a node-style callback where next() is called to pass the error along, so something actually happens if it errors out..also this is kinda a bad part of the course: the instructor had these wrapped within an express route handler elsewhere, so thats how next() is able to be called.

you can also create a document like:

Dog.create({name: 'Bingo'}, function(err, savedDog) {
});


and the final way is like:

var dog = new Dog(); = 'Bingo';, savedDog) {
});


update a document:



Model.findByIdAndUpdate('28393928392', {name: 'new name'}, function(err, updatedDoc){
  if (err) {
    // handle the error
  } else {
    // updatedDoc comes back here
  }
});

^ whatever this id is will be updated as an object with name: 'new name'

Along with performing methods on the model, we can use the actual document to do things:

Model.findOne({name: 'Jan'}, function(err, doc) {
  doc.remove(function(err, removedDoc) {
    // just deleted the document from the DB
  });
});

doing operations on the actual documents is possible because what mongoose returns is not just plain objects but instances of the model, with specific properties and methods that allow us to operate on them and write back to the DB.

^ wat?

so I guess if you query the database once, you get back a model instance, not just a plain object, so perhaps you dont have to do multiple queries to the can shorten these!!

seems strange, but look into it, because had I had that info in mind previously, I prob wouldve written my last project differently.


Mongo is a noSql database so we dont have join tables, but we still need a way of seeing relational data. The solution for this is called population, which you can think of as 'a join at call time'.

lets take a look at an example:

var DogSchema = new mongoose.Schema({
  owner: {
    type: mongoose.Schema.Types.ObjectId,
    ref: 'person'
  },
  name: String
});

var PersonSchema = new mongoose.Schema({
  name: String
});

var Dog = mongoose.model('dog', DogSchema);
var Person = mongoose.model('person', PersonSchema);

// find all dogs and populate their owners
// this will grab the ids on the owners field
// and go to the ref, which is the person model
// and grab the person doc with the matching id
// and place the object on the owners field
var promise = Dog.find({})
  .populate('owner')
  .exec();

promise.then(function(dogs) {
  // each dog's owner field is now the full person doc
}, function(err) {
  // handle the error
});


so Dog.find({}).populate('owner') will execute at runtime, find the dogs and match them to their owners to associate the two, essentially populating the 'owner' field with the correct person docs. Take another look:

  owner: {
    type: mongoose.Schema.Types.ObjectId,
    ref: 'person'
  }

^ this 'ref' property references the model (and thus collection) to find something in, and the objectid 'type' property is how we find it. After the function has run, the populate() method will turn 'owner' into:

  owner: {
    name: 'frank'
  }
Exercise 9 Solution

the router method 'param' looks at the id (or whatever parameter), finds the correct thing, then attaches it to the request:

exports.params = function(req, res, next, id) {
  // use the id and attach the category to req
  // (the query itself was elided in my notes; something like this)
  Category.findById(id)
    .exec()
    .then(function(category) {
      if (!category) {
        next(new Error('No category with that id'));
      } else {
        req.category = category;
        next();
      }
    }, function(err) {
      next(err);
    });
};

^ this params is like a middleware for our routes!!! it fires before anything else..we'll see this when we hook up our routes.

after attaching the category to the request, it calls next(), which goes to the next fn in the in our case it would either be 'getOne()' or 'put()'

EX: lets say it goes to getOne(), we already have the category id and have attached it to the request object via req.category, so now all we have to do is send it back via res.json()

also!! just reminder: res.json() sends a response back to the client side as json...this includes 'null' or 'undefined' values.

remember, this params fn is middleware!! (categoryController.js)

interesting thing is that we're querying the DB in the params() middleware fn, and then the route handlers just use whats already on the req...we're only querying the DB once and thats SOMETHING TO REMEMBER.


exports.get = function(req, res, next) {
  // need to populate here
  // (the model call was elided in my notes; in the course this is the Post model)
  Post.find({})
    .populate('author categories')
    .exec()   //we call exec because populate() does not return a promise!!!
    .then(function(posts) {
      res.json(posts);
    }, function(err) {
      next(err);
    });
};

remember populate does this:

  • give it the fields you want to populate - it'll go to those fields on the document, grab their objectIDs, go to whatever collection the 'ref' property says, find the doc with that objectID, and attach it to the field.

REMEMBER: a populate() function hydrates the model's relationships at calltime

Creating Promises promises = 'notify me when this comes back'

lets refactor a callback func into a promise:

var action = function (cb) {
  setTimeout(function() {
    cb('hey');
  }, 5000);
};

action(function(arg) {
  console.log(arg); // 'hey'
});


var action = function() {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      resolve('hey');
    }, 5000);
  });
};

action()
  .then(function(word) {
    console.log(word); // 'hey'
  });

you could also do it like:

var action = function() {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      resolve('hey');
    }, 5000);
  });
};

var promise = action();

promise.then(function(word) {
  console.log(word); // 'hey'
});

theres also the reject argument:

var action = function() {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      reject(new Error('noooooooo'));
    }, 5000);
  });
};

var promise = action();

.then() gives us the resolve property, and .catch() gives us the error:

  .then(function(word) { // this may or may not happen based on the promise results
  })
  .catch(function(err) { // this may or may not happen based on the promise results
  });


theres also a promise method called .finally() which runs (you guessed it) once the promise settles, either way...

Nested Promises
  .then(function(file) {
    return 'hey';
  })
  .then(function(word) {
    console.log(word); // this will console.log('hey')
  });

^ notice how the return value is being carried down here via the argument 'word'?

you can also return another word inside that .then() to put in another .then():

  .then(function(file) {
    return 'hey';
  })
  .then(function(word) {
    console.log(word); // this will console.log('hey')
    return 'ya boy';
  })
  .then(function(newWord) {
    console.log(newWord); // this will console.log('ya boy')
  });

so we can return promises within promises...
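a runnable sketch of that (the delay helper and timings here are made up; the point is that a returned promise is waited on before the next .then fires):

```javascript
// helper: a promise that resolves with `value` after `ms` milliseconds
var delay = function(value, ms) {
  return new Promise(function(resolve) {
    setTimeout(function() { resolve(value); }, ms);
  });
};

delay('hey', 10)
  .then(function(word) {
    console.log(word);          // 'hey'
    return delay('ya boy', 10); // returning a promise here...
  })
  .then(function(newWord) {
    console.log(newWord);       // ...means this .then waits for it: 'ya boy'
  });
```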

.catch(function() {
});


^ this is a pattern we see a lot. whatever logFile() resolves will be fed to sendEmail(), and whatever sendEmail() resolves will be fed to callHome().

.catch() will run if any of the above functions error out.

if you'd like more fine-grained control over your errors, you can write an error call back inside the .then() method:

.then(logFile, function(err) { // this fn will run if theres an error
})
.catch(function() {
});


Promise.all() = this will resolve an array of promises and wait till theyre all done before moving on.

In the context of DB queries, this is helpful because we can do stuff in parallel. so if there are DB queries that dont rely on other queries, we can run them in parallel, and when they come back we can merge them and send it.


var readAllFiles = function() {
  var promises = [readFile(), readFile(), readFile()];
  return Promise.all(promises);
};

  .then(function(files) {
    // all three files are here, in order
  });
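a runnable version of the parallel idea, with fake async "queries" standing in for readFile() (the names and delays are made up):

```javascript
// three fake async "queries" that resolve after different delays
var fakeQuery = function(result, ms) {
  return new Promise(function(resolve) {
    setTimeout(function() { resolve(result); }, ms);
  });
};

// Promise.all runs them in parallel and resolves with an array
// of results, in the same order as the input array (not finish order)
Promise.all([
  fakeQuery('lions', 30),
  fakeQuery('tigers', 10),
  fakeQuery('bears', 20)
]).then(function(results) {
  console.log(results); // [ 'lions', 'tigers', 'bears' ]
});
```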
JSON Web Tokens

Theres many ways to protect our API...a typical way is basic auth, with credentials sent in a header...another way is to use cookies for authentication and identification..we have cookies and a session store.

a session store is a key:value store of who's logged in right now. when we make a request, the cookies already sit on the header; we can deserialize those cookies, match them against something in the session store to see who it is, and then give the correct person the info.

but in mobile dev, we dont deal with thats a :/

today we'll look at JSON Web Tokens, which is a heavily used open standard.

because we're using a token approach, we dont need to keep track of whos signed in with a session store OR have any cookies.

this is weird, we want the server to be the source of truth, NOT the client.

A JSON web token will be sent on every request because REST is stateless and therefore we dont know about the previous request.

The token has to be stored ON THE CLIENT that is requesting resources. ^ so people often use localStorage to store on the client

the server doesnt care where its stored, but if you want something from the server, ya gotta give that token back. thats all its concerned about,"ya want my resources, gimme the token first"
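to see why the server can stay stateless, its worth looking at what a token actually is. this hand-built token is only a sketch of the shape (real JWTs use base64url encoding and a real HMAC signature, which is what jwt.sign()/jwt.verify() handle, so this one would fail verification):

```javascript
// a JWT is just header.payload.signature, each part encoded JSON
var encode = function(obj) {
  return Buffer.from(JSON.stringify(obj)).toString('base64');
};

var token = encode({ alg: 'HS256', typ: 'JWT' }) + '.' +
            encode({ _id: '2873273237328378273' }) + '.' +
            'fake-signature';

// anyone can read the claims back out; only the signature check
// (what jwt.verify() does) proves the server actually issued it
var claims = JSON.parse(Buffer.from(token.split('.')[1], 'base64').toString());
console.log(claims._id); // '2873273237328378273'
```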

Using JSON Web Tokens

Take a look at this:

var user = {_id: '2873273237328378273'};

// send token back to client on signup/signin
var token = jwt.sign(user, 'shhhh, its a secret'); // <-- this 'shhh' is a secret string; we don't know what it is so we're using a placeholder for the example

// later, on an incoming request, we will decode the token
// to see who the user is. The token is probably on the
// authorization header. This will throw an error if the token
// is not a valid JWT and is instead a random string.
var user = jwt.verify(req.headers.authorization, 'shhhh, its a secret');

// proceed to look the user up to see if they exist in our system
User.findById(user._id, function(){});

^ so first we create a 'user' object with a mongo id, then we send it back to the client via jwt.sign(user, 'someLongString');

later, when the user wants to do something like edit a profile, he/she sends a request to the server and the server processes it like: var user = jwt.verify(req.headers.authorization, 'shhhh, its a secret');

^ the client grabs the token from localStorage and sends it along; the server side then takes the token and transforms it back into the 'user' object, which it uses to find the specific user like this:

User.findById(user._id, function(){});

^ we do this check to make sure the person is a real user, then we can do other things. Interesting: even if someone crafted their own token and somehow got it past jwt.verify(), the decoded id still has to match a user id in the DB, so they can't get into an account.

look at this one more time: var user = jwt.verify(req.headers.authorization, 'shhhh, its a secret'); ^ jwt is expecting the token to be on the req.headers.authorization property.

How do you expire a token? it's possible; it depends on the library you're using. In our case we're using Json Web Tokens, so you can just pass in an options object like "expires: '5seconds'"...we'll get into this in a bit.

JWT is NOT INCLUDED in node, its 3rd party.

Usernames and Passwords

So the process goes as follows:

  1. A user signs up to access protected resources on our api, (username && password).
  2. On success, we create a new user in our DB. We use the new user's id to create a JWT.
  3. We then send that JWT back to the user on the response of signup so that they can save it and send it back on every request to a protected resource.

*** So not only do we get authentication, we also get identification with a JWT because we can reverse the token to be its original object and get the user id.

we can also do authorization as well as identification, so we can let certain users do certain things and restrict access as needed.

we need not only a username, but also a password...if you store someone's password in plain text in a database, you're in deep trouble. it needs to be transformed into a hash.

unlike a json web token (which can be decoded back into its original object), we can't undo the hash of a password. hashes are one-way.

  • In order to check to see if a given password matches a saved hashed password, like on signin, we just hash the given password and see if it matches the saved hashed one. *

^ so password checks are done by taking a given password, hashing it, and see if it matches the saved user's hashed password.

authentication with middleware

lots of ways to do auth with express..

mongoose allows us to add custom functions to our models, and this makes sense because the models are the things that are going to save the data. instead of having node or express know about hashing passwords, we can just tell the model, "hey, you know how to hash a password: hash yourself, or check yourself" vs. "hey node, check this password on this model"...just "model, check yourself".

to teach mongoose new tricks there are a couple of ways:

DogSchema.methods.bark = function() {};

^ here we attach methods on the model instance...you can think of this as similar to defining functions on a JS prototype.

then we have the inverse of that, where instead of attaching functions on the model instance (i.e. a document), we're defining it on the class/constructor itself. take a look at these two examples below:

// you can think of 'methods' as prototypes. Any method we define here
// will be available on instances of Dog.
DogSchema.methods.bark = function() {
  // this === the dog document
};

// You can think of statics as static methods on a class/constructor;
// they belong to Dog itself and not to an instance of Dog,
// like how Array.isArray() is a static method of the Array class.
DogSchema.statics.findByOwner = function() {};


ex: if you want this one dog to bark, you'd put the bark() method on an instance of a dog (methods, the first example), but if you want to find all dogs by owner, you'd use statics, as in the second example.
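the methods-vs-statics split can be seen in plain JS, no mongoose needed. Dog, bark, and findByOwner here are made-up names mirroring the schema examples above:

```javascript
function Dog(name, owner) {
  this.name = name;
  this.owner = owner;
}

// like DogSchema.methods.bark: lives on the prototype,
// so every dog *instance* can bark and `this` is that dog
Dog.prototype.bark = function() {
  return this.name + ' says woof';
};

// like DogSchema.statics.findByOwner: lives on the constructor itself,
// the way Array.isArray() lives on Array
Dog.findByOwner = function(owner, dogs) {
  return dogs.filter(function(d) { return d.owner === owner; });
};
```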

As Express has middleware, so does Mongoose. Middleware is perfect for validating, changing, notifying, etc. we'll use middleware to hash our passwords before a user is created. Middleware will attach to life cycle events around our documents like before save, before validations, after save, and so on.

Take a look at this:

// runs after a document is saved
DogSchema.post('save', function(next) {
  var doggy = this;
  // socket = some websocket library in this case
  socket.emit('new:doggy', doggy);
  next();
});

// runs before a document is validated
DogSchema.pre('validate', function(next) {
  // e.g. hash a password here, then call next()
});

^ these post() and pre() fns hook into life cycle events. so after something occurs on our DogSchema (in this case 'save', i.e. after something is saved to the DB), post() runs: in our example we emit something to a web socket, then run next(), which kicks off the next piece of middleware in the stack.

.pre() runs a fn before the life cycle event you name, here before 'validate'.
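mongoose isn't needed to see how pre-save middleware chains. Here's a tiny, made-up hook runner (MiniSchema is my name, not a mongoose API): each hook gets `this` bound to the document and a next() to hand off, just like DogSchema.pre('save', fn):

```javascript
function MiniSchema() {
  this.preSave = [];
}

// register a hook for the 'save' life cycle event
MiniSchema.prototype.pre = function(event, fn) {
  if (event === 'save') this.preSave.push(fn);
};

// run each pre-save hook in order; each hook calls next()
// to hand off to the one after it, like express middleware
MiniSchema.prototype.save = function(doc, done) {
  var hooks = this.preSave;
  var i = 0;
  (function next(err) {
    if (err) return done(err);                      // a hook can abort the save
    if (i === hooks.length) return done(null, doc); // all hooks ran, "save" it
    hooks[i++].call(doc, next);                     // run the next hook on the doc
  })();
};

// ex: transform the password before the user document is saved
var userSchema = new MiniSchema();
userSchema.pre('save', function(next) {
  this.password = 'hashed:' + this.password; // stand-in for real hashing
  next();
});
```

this is the shape we'll use to hash passwords before a user is created.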

Executing CRUD operations

if you look in your terminal, you can start executing CRUD commands after starting up your DB via mongod and then, in another terminal, running npm start:

create a new user: http POST localhost:3000/api/users username=person
create a new blog post: http POST localhost:3000/api/posts author=5a0cc52f41ab6025fd48ad42 title='my first blog post' text='blah blah'
get all users: http GET localhost:3000/api/users

^ so since we've created everything, we can now begin using it.

authentication Configuration

run git checkout step-11.

seeding a DB = populating it with collections of documents.

lots of stuff has been added for us, but really what we want to look at is the /auth folder:

  • auth.js: everything related to authentication...the middleware, the functions, the signing, the checking, etc is gonna go through here.

lets look at the packages included:

  • jwt - json web token, it's what we use to sign and verify our json web tokens
  • express-jwt - a wrapper around jwt; all it does is verify your tokens for you. this package works nicely with express middleware (built by the same people as jwt)

checkToken - this returns a middleware fn for us: we pass in the secret and it returns a middleware fn that determines whether or not the token is valid ^ not sure about this one

user - our user model for auth purposes.

exercise 11

decodeToken() - this is to decode the token: take the token, with the secret, and turn it back into what it was initially. if that fails, throw an error and pass it back.

exports.decodeToken = function() {
  return function(req, res, next) {
    // make it optional to place the token on the query string.
    // if it is there, move it onto the headers where it should be,
    // following the 'Bearer 034930493' format,
    // so checkToken can see it and decode it
    if (req.query && req.query.hasOwnProperty('access_token')) {
      req.headers.authorization = 'Bearer ' + req.query.access_token;
    }

    //^ so if req.query exists and it has the property 'access_token',
    // grab the token off it and attach it to req.headers.authorization.
    // but why put 'Bearer ' before it? checkToken looks for the
    // 'Bearer dflk;jdfgkl;' format...it's a way of namespacing which
    // token we want to attach.

    // checkToken will call next if the token is valid
    // and send an error if it's not. It will attach
    // the decoded token to req.user
    checkToken(req, res, next);

    //^ if successful, checkToken will attach the DECODED token to req.user...worth saying again.
  };
};

-really, really going to have to review this section. this is hard.

checkToken() - looks for the token on req.headers.authorization; it has the secret in a closure (up top), and it tries to verify (jwt.verify()) the token against that secret. if it succeeds, it grabs whatever object comes back, attaches it to req.user, and runs next(). if it doesn't, it calls next() with an error.

verifyUser() - does this user exist in the db, and is their password the same as the one they signed up with? if so, attach the result to req.user.

User.findOne() -- we use the mongo method findOne() to find the user via username. if there's no user, it sends a 401 response; if there is a user but they didn't provide the right password, it sends a 401 with a 'wrong password' string. If everything is ok, we attach the user to req.user and call next() to go to the next piece of middleware. there's also error handling.
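a hedged sketch of that verifyUser-style middleware, with the User.findOne lookup stubbed out (makeVerifyUser and findUser are my names) so it runs standalone. The real code compares password hashes and sends 401 responses; here next(err) stands in for both:

```javascript
function makeVerifyUser(findUser) {
  return function verifyUser(req, res, next) {
    // look the user up by username (stubbed DB call)
    var user = findUser(req.body.username);
    if (!user) {
      return next(new Error('no user with that username'));
    }
    // real code would hash req.body.password and compare hashes
    if (user.password !== req.body.password) {
      return next(new Error('wrong password'));
    }
    req.user = user; // later middleware (like controller.signin) reads req.user
    next();
  };
}
```

controller.signin then only has to read req.user._id and sign a token, because this middleware already did the vetting.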

now take a look at auth/controller.js:

exports.signin = function(req, res, next) {
  // req.user will be there from the verifyUser middleware.
  // Then we can just create a token
  // and send it back for the client to consume
  var token = signToken(req.user._id);
  res.json({token: token});
};

^ this hits after all this middleware, see? in auth/routes.js: router.post('/signin', verifyUser(), controller.signin);

after the username & password are vetted to be in the system, we kick it off to controller.signin, which creates a token (via an existing method, using the user's ._id prop) and sends the token back to the client.

this is the method that creates that token after an id is given....

// util method to sign tokens on signup
exports.signToken = function(id) {
  return jwt.sign(
    {_id: id}, // create an object with an _id prop because the DB calls it _id; just for matching's sake (not required, just for sanity)
    config.secrets.jwt, // this is the secret, defined elsewhere
    {expiresInMinutes: config.expireTime} // this is when the token expires
  );
};

Testing the authentication 11

from branch step-11-fix, run this in your terminal: http POST localhost:3000/auth/signin username=katamon password=test and it should return this:

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 184
Content-Type: application/json; charset=utf-8
Date: Thu, 16 Nov 2017 15:06:51 GMT
ETag: W/"b8-o2ZaUF79Z2aM327HGmK3buk2A/M"
X-Powered-By: Express

    "token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJfaWQiOiI1YTBkYTk1MjdhOWVjMDJkZGY3NzhjM2EiLCJpYXQiOjE1MTA4NDQ4MTEsImV4cCI6MTUxMTcwODgxMX0.d2u3o6RmmI4JBEbY5zQvRl4wGKe8s-7xb8S2FsZMlHM"

^ see! we're sending the token to the client because its successful.

Securing the routes

lets think about our resources (which again is our data)....which routes are sensitive? which should be protected...lets first think about /posts:

GET /posts
GET /posts/:id
POST /posts
PUT /posts/:id
DELETE /posts/:id
^ we should probably protect the ability to delete a post. also the ability to update a post so no one puts anything funny there, and probably also the ability to post something too. everyone should be able to get a post or all posts.

since our route pattern is the same in /category and /user, we could say the same for all.

Understanding CORS

(in branch step-12-fix)

CORS - cross origin resource sharing

by default, if i'm on localhost:4500 and i'm trying to access a route on localhost:3000, the browser won't allow me to do that; it's a security concern. there are ways around that, like JSONP, or you can just have your server enable CORS...

when chrome makes a req to your server, it actually does 2 requests for that 1 request.

it does what's called a 'pre-flight check'...

besides GET/PUT/POST/DELETE, there's another http verb called OPTIONS, and this request is what the browser sends to the server, like, "hey, am i allowed to make a request to you?" the server can respond back yes with a 200 response, or it can check whether your url is on the allowed list and, if so, send a 200 response, otherwise throw an error.

So enabling CORS is setting up the pre-flight check and setting up the appropriate headers.
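a hand-rolled sketch of what "enabling CORS" means in middleware form (the cors npm package does this properly; the whitelist here is a made-up example matching the localhost:4500 scenario above):

```javascript
var whitelist = ['http://localhost:4500'];

function cors(req, res, next) {
  var origin = req.headers.origin;
  // only set the CORS headers for origins we trust
  if (whitelist.indexOf(origin) !== -1) {
    res.setHeader('Access-Control-Allow-Origin', origin);
    res.setHeader('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE');
    res.setHeader('Access-Control-Allow-Headers', 'Authorization,Content-Type');
  }
  // answer the browser's pre-flight OPTIONS check directly and stop here
  if (req.method === 'OPTIONS') {
    res.statusCode = 200;
    return res.end();
  }
  next(); // a normal request keeps moving down the stack
}
```

mounted early (app.use(cors)), every route gets the pre-flight answered before any real handler runs.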

Exercise 12

we're going to use the auth.js methods getFreshUser and decodeToken to lock down a route.

if you put decodeToken in front of any route (or anywhere middleware can go), it will try to find the token; if it doesn't find one, it'll error out. next, if you use getFreshUser after it, it takes the token decodeToken just decoded and uses it to find an actual user, then either attaches the user to the request object (req.user) and fires next(), or sends an error out with next(err).

^ if you put those 2 together, you'll have your token decoded, and then your user authenticated as well.

now look at how we're locking down our routes in postRoutes.js in step-12-fix:

var router = require('express').Router();
var logger = require('../../util/logger');
var controller = require('./postController');
var auth = require('../../auth/auth');

var authMiddleware = [auth.decodeToken(), auth.getFreshUser()];
// lock down the right routes :)
router.param('id', controller.params);

router.route('/')
  .get(controller.get)
  .post(authMiddleware, controller.post);

router.route('/:id')
  .put(authMiddleware, controller.put)
  .delete(authMiddleware, controller.delete);

module.exports = router;

  • before deploying, make sure you have NO SECRETS in source control, use envs for secrets

  • test everything you can and DO ERROR HANDLING.

  • make sure all your dependencies have been installed. if you have gulp globally installed, Heroku, for example, doesn't know about that, so include it in your package.json.

  • you can also freeze node modules by running 'npm shrinkwrap'...this makes sure all dependencies stay the same and do not update!

  • ^ make sure all dependencies are in 'dependencies' in package.json

  • dont hard code things like ports, db urls, dev urls, etc. pass them in as env variables and use the config.js file to determine which env you're in!

ex: blah.get('localhost:3000') <-- this won't run on heroku because there is no localhost:3000 there.


  1. sign in
  2. make a new app --> click create app

make a Procfile

basically, I followed the directions on heroku, troubleshot some issues, and that was that.

foreman = instead of having to deploy every time to test, we can use this to run the app locally the way heroku would...

foreman start = starts our app on port 5000

http://localhost:5000/api/users <-- now we can see our users and see that its running.

Configuring the deployment

we have an issue....a huge one...if we run npm start we get errors. that's because we haven't included mongo and we don't have a production database url in production.js.

  1. we need to create a db object in production.js:
module.exports = {
  // disable logging for production
  logging: false,
  db: {
    url: 'mongodb://localhost/nodeblog'
  }
};

now we need to change that local url for something that lives out on the web. we can actually peruse heroku for that:

resources -> find more -> mongoLab

sign up for mongoLab and add it to your app, then we change the url:

module.exports = {
  // disable logging for production
  logging: false,
  db: {
    url: process.env.MONGODB_URI
  }
};

now lets go in heroku: project -> settings, and add our config variables:

NODE_ENV = production
JWT = dudeman

looks good, commit the changes and run git push heroku master

^ now we have an empty array, because we have an empty db! so happy to actually say that.

the DB is hosted via mongolabs, and heroku is used for deployment

now lets use httpie to post a user to the db:

http POST username=david password=1234

and we get our token back:

    "token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJfaWQiOiI1YTBlMjE4ZmM2ZjY0OTAwMTRiNjMwMmUiLCJpYXQiOjE1MTA4NzU1MzUsImV4cCI6MTUxMTczOTUzNX0.UL3ZGUat3c05ZPxNunRDAOlQX0GqvmtX-LdcsCyG63w"

BOOM! so now its working

Foreword - check parse API for a good example of a good api