notes and code from the 'Testing JavaScript Applications' course πŸ“š

My Notes

Testing JS Applications: Notes

Bugs we find in our software: security bugs, business logic bugs, UX bugs, regressions, accessibility bugs, i18n bugs.

In 1962, NASA's $80 million rocket exploded because of a bug: a missing hyphen in the code, hit only in an edge case. In 1999, NASA lost a $125 million Mars orbiter because of a bug: of the two teams working together, one used the metric system, one used the imperial system, and they never converted their units.

the failure wasn't in writing the bugs, but in not identifying the issues in the system.

how do we prevent bugs?

  1. static types? Flow / TypeScript - once you have types you don't have to worry about calling a function with a number when it needs a string, etc.

  2. linting: ESLint

what kinds of testing can we do?

  1. unit testing - testing the inputs and outputs of a small bit of code

  2. integration testing - testing multiple units together: ex: the integration between a dropdown and a fake response with data when it's clicked

  3. E2E testing - no mock data; pretend you're a user and automate a user flow through your application. ex: create an account -> log in -> add products to the cart -> pay online.

  4. the list goes on.

Where do we focus our time? (in order from least to most difficult/expensive)

  1. unit
  2. integration
  3. e2e

...

all a testing framework is, is an abstraction around the concept of: take an input, pass it to something, get the output, then verify the state of the world; if it doesn't look like you want it to, you throw an error. there are lots of abstractions around this, but generally this is what ALL testing frameworks do.
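
To make that concrete, here's a rough sketch of that core idea in plain JavaScript (the sum function and the tiny test/expect helpers are made up for illustration, not from the course):

// a hypothetical function under test
function sum(a, b) {
  return a + b;
}

// the essence of every testing framework: run the code, compare, throw on mismatch
function expect(actual) {
  return {
    toBe(expected) {
      if (actual !== expected) {
        throw new Error(`${actual} is not ${expected}`);
      }
    },
  };
}

function test(title, callback) {
  try {
    callback();
    console.log(`βœ“ ${title}`);
  } catch (error) {
    console.error(`βœ• ${title}`);
    console.error(error);
  }
}

test('sum adds numbers', () => {
  expect(sum(1, 2)).toBe(3);
});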

code coverage = a mechanism that shows how much of our code is actually run by our tests.

JSDOM env = JSDOM is the browser APIs implemented in Node...it allows you to run browser-y things in Node.js. if we're working strictly in Node.js, though, we might not need this, and it adds some performance weight as well, so we can set the 'testEnvironment' to 'jest-environment-node' and remove it. handy. :)
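
A minimal sketch of what that might look like (the file name jest.config.js is an assumption here; testEnvironment is a real Jest option):

// jest.config.js - a minimal sketch
module.exports = {
  // skip jsdom when the code under test never touches browser APIs
  testEnvironment: 'jest-environment-node',
};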

jest also has a 'watch' feature similar to sass watch, so you can re-run tests on the fly in the terminal as you work. you can also pinpoint specific files to test if you don't want all of them run. super handy stuff.

writing unit tests intro
  • think about use cases for testing rather than specific lines of code. (ex: 'make sure a sandwich is returned if we give a proper input, and null if not').
import makeMeASandwich from '../make-me-a-sandwich'

test('returns null if the sandwich does not exist', () => {
  const req = {query: {}};
  const result = makeMeASandwich(req);
  expect(result).toBeNull();
})

test('returns my sandwich', () => {
  const req = {query: {sandwich: 'Monte Cristo'}};
  const result = makeMeASandwich(req);
  expect(result).toBe('Monte Cristo');
})
principles of testing

look at 'Monte Cristo' above...you may want to DRY that into a variable, even in testing:

test('returns my sandwich', () => {
  const sandwich = 'Monte Cristo';
  const req = {query: {sandwich: sandwich}};
  const result = makeMeASandwich(req);
  expect(result).toBe(sandwich);
})

nice.

we can also DRY out these 2 tests. but the real reason we're doing this is not for the DRY principle's sake, but to show the association between these 2 tests:

import makeMeASandwich from '../make-me-a-sandwich'

test('returns null if the sandwich does not exist', () => {
  const req = getReq();
  const result = makeMeASandwich(req);
  expect(result).toBeNull();
})

test('returns my sandwich', () => {
  const sandwich = 'Monte Cristo';
  const req = getReq(sandwich);
  const result = makeMeASandwich(req);
  expect(result).toBe('Monte Cristo');
})

function getReq(sandwich) {
  return { query: {sandwich}};
}
Assertions

Jest has a ton of assertions available, like expect with toBe, toEqual, toMatchObject (this object has these properties and values), and so forth.
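
A few of those matchers side by side (a rough sketch; these are all real Jest matchers, but the values are made up):

test('a sampler of Jest matchers', () => {
  expect(1 + 1).toBe(2); // strict equality (===)
  expect({name: 'Monte Cristo'}).toEqual({name: 'Monte Cristo'}); // deep equality
  expect({name: 'Monte Cristo', price: 8}).toMatchObject({name: 'Monte Cristo'}); // subset of properties
  expect(['ham', 'cheese']).toHaveLength(2); // length of an array or string
  expect('Monte Cristo').toMatch(/Cristo/); // string against a regex
});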

hmmm...apparently you learn and retain information better if you elaborate on what you just learned.

Introducing TDD

TDD = write the tests first, watch them fail, then write the code to make them pass.

some people believe in this 100%, some not so much. what does it actually look like?

write one test at a time!

interesting: you only want to write the code that will allow your current test to pass, building on it with each new test.

IMPORTANT: watch the workflow in 'introducing test-driven development' to get a solid understanding of the TDD cycle of 'red -> green -> refactor'.

TDD workflow = write the test for the simplest case, write as little code as possible to make that test pass, and keep going until all the cases are accounted for.

^ gotta say, that's a pretty brilliant approach. I feel dumb for not doing this already. ;/
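
To make the red -> green -> refactor loop concrete, here's a tiny made-up example (isPalindrome is hypothetical, not from the course):

// RED: write the simplest failing test first
test('returns true for a single-word palindrome', () => {
  expect(isPalindrome('racecar')).toBe(true);
});

// GREEN: write just enough code to make that test pass
function isPalindrome(str) {
  return str === str.split('').reverse().join('');
}

// repeat: the next test is the next RED - it fails until we lowercase the input
test('ignores case', () => {
  expect(isPalindrome('Racecar')).toBe(true);
});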

Jest vs. Mocha

jest parallelizes your tests = 1000 tests? it runs as many as it can at once, which makes for less time spent. the configuration is also super easy, and the --watch feature seems pretty cool too. it ships with an assertion library as well, so you don't need to import anything else, and it also comes with a mocking API...we'll check this out later.

this guy really backs Jest for unit and integration testing.

Challenge 3

super useful, review this for a good example of TDD in action!

TDD tradeoffs

TDD doesn't work well when you don't know what the API is going to look like. so how do you still get the benefits of TDD even if you don't know what the API will look like?

  1. write out all the code you want to implement, and how you want the API to look. then you move that code aside or delete it all and begin again using TDD. you might think you'll write the same code, but in the process of approaching it with a TDD mindset, you'll come up with cleaner, more testable code.

  2. another place where TDD is difficult: UI development!

Finding a Bug

find where the bug exists before writing the test. find the edge case, then write a test for the edge case, then fix the code to handle the edge case so the test passes.

It's odd because I haven't written a lot in this course so far (2hrs down out of 5.5 total), but I feel I've learned a good bit. specifically the idea of writing your test first (which fails), then writing the most minimal code to satisfy the test, then writing another test for an additional case so the code fails that test, then writing the code so that test passes, and so forth until all possible cases have been accounted for.

that makes sense for fresh code. in refactoring existing code, as we just did in 'Challenge 4', we found a bug in api/src/models/user.js...a UserSchema method called toProfileJSONFor():

 UserSchema.methods.toProfileJSONFor = function(user) {
    return {
      username: this.username,
      bio: this.bio,
      // this is where the bug is...
      // we're not adding this.image
      // to the object!
      following: user ? user.isFollowing(this._id) : false,
    }
  }

the problem, as we could see in our UI, was that we weren't getting avatars for each article. we tracked the bug down to the server-side code, where the schema method toProfileJSONFor() was returning everything we needed except the image.

Now typically, I'd just add the image like image: this.image and call it a day. HOWEVER, if we create a test for this method, then if someone comes through later and removes that field, the test will fail. In this way, we are not just fixing the code to behave as expected, we are also enforcing that it stays that way; if it doesn't, our soon-to-be test will fail. so, let's write the test (in api/src/models/__tests__/user.js):

test('toProfileJSONFor creates a new user', () => {
  const userOverrides = {
    username: 'david',
    bio: 'long bio',
    image: 'http://example.com/avatar.png',
    following: false
  };

  const user = generateUser(userOverrides);
  const result = user.toProfileJSONFor();
  expect(result).toMatchObject(userOverrides);
});

^ we create a new user using the generateUser() fn, plus an object of mock data matching the shape toProfileJSONFor() returns. we then create a variable called 'result' that calls toProfileJSONFor() on our newly generated user object. we then assert (via Jest) that result matches the initial object. it FAILS, because we still have to add image to the returned object in api/src/models/user.js, so let's do that:

 UserSchema.methods.toProfileJSONFor = function(user) {
    return {
      username: this.username,
      bio: this.bio,
      image: this.image, //added this!
      following: user ? user.isFollowing(this._id) : false,
    }
  }

now our tests pass, so we've not only fixed our code, we've also protected it against future regressions with a test. ✨

Integration Testing

unit tests are for specific modules; usually you don't touch much outside of the module. they're small, fast, and it's easy to see what's broken when one fails.

integration tests, for an API for example, have you run an entire server along with the services the API needs. for our tests we'll fire up our MongoDB server and so forth. that's totally fine, because one of the things we're testing is the integration between our API and those services.

start-server.js: we use faker to generate fake user data, we authenticate, and then there's the code that starts the server.
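
For illustration, generating a fake user with faker might look something like this (the exact fields and helper name are assumptions, not the workshop's code):

const faker = require('faker');

// build a fake user object for seeding the test database (shape assumed)
function generateFakeUser() {
  return {
    username: faker.internet.userName(),
    email: faker.internet.email(),
    bio: faker.lorem.sentence(),
    image: faker.image.avatar(),
  };
}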

Setting up the server

we're now going to test the /users route so we can guarantee that we get back the users the server has.

it appears Jest has lifecycle hooks of its own, like beforeAll(), beforeEach(), afterAll()...

using fake data

important: by exporting a fn that does the work for us (startServer), things get a lot easier....your modules should never do anything at the root of the module; they should always export a fn that you call to do the work...it makes things way easier to test.
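
A rough sketch of that pattern together with Jest's lifecycle hooks (the startServer signature, port, and route are assumptions, not the workshop's exact code):

// start-server.js - export a function instead of starting the server at the module root
const express = require('express');

function startServer({port = 3001} = {}) {
  const app = express();
  app.get('/api/users', (req, res) => res.json({users: []}));
  return new Promise(resolve => {
    const server = app.listen(port, () => resolve(server));
  });
}

module.exports = startServer;

// users.test.js - start the server before the tests, close it afterwards
let server;

beforeAll(async () => {
  server = await startServer({port: 3001});
});

afterAll(done => server.close(done));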

we'll use axios to interact with our server....

test('can get users', () => { //remember, this is JEST!!
 return axios.get('http://localhost:3001/api/users')
   .then (response => {
     console.log(response.data.users);
   });
});

^ testing frameworks handle promises via the return keyword: if you return a promise from the test, the framework waits for it to settle before finishing the test.

benefits of random data: it helps us confirm that what we're testing is scoped to the specific thing, plus random data has a way of revealing bugs in your code that you didn't know about. this is a bummer, but you want it to happen because it means you weren't covering a specific case. ex:

'randomly generating an article with multiple tags that were exactly the same...we didn't account for this, but our test randomly generated it, which resulted in a test failure because we don't want 2 of the same tag in an article.'

problem: how do we assert something to be true when it's always going to be randomly generated fake data? we can test it against the schema!

test('can get users', () => { //remember, this is JEST!!
  return axios.get('http://localhost:3001/api/users')
    .then (response => {
      // console.log(response.data.users);
      const user = response.data.users[0]; //grab a user object to test against the schema
      // console.log(user);
      expect(user).toMatchObject({
        name: expect.any(String),
        username: expect.any(String),
      })

    });
});

take a look at our 2 tests so far and see how self-explanatory they are:

test('can get users', () => { //remember, this is JEST!!
  return axios
    .get('http://localhost:3001/api/users')
    .then(response => {
      // console.log(response.data.users);
      const user = response.data.users[0]; //grab a user object to test against the schema
      // console.log(user);
      expect(user).toMatchObject({
        name: expect.any(String),
        username: expect.any(String)
      });
    });
});


test('can get 2 users at offset 3', () => { //remember, this is JEST!!
  const fiveUsersPromise = axios
    .get('http://localhost:3001/api/users?limit=5')
    .then(response => response.data.users);

  const twoUsersPromise = axios
    .get('http://localhost:3001/api/users?limit=2&offset=3')
    .then(response => response.data.users);

    return Promise
      .all([fiveUsersPromise, twoUsersPromise])
      .then(responses => {
        const fiveUsers = responses[0];
        const twoUsers = responses[1];
        console.log(fiveUsers.length);
        console.log(twoUsers.length);
      });
});

so nice how well it reads...so self-explanatory. let's add some assertions to actually test our case:

test('can get 2 users at offset 3', () => { //remember, this is JEST!!
  const fiveUsersPromise = axios
    .get('http://localhost:3001/api/users?limit=5')
    .then(response => response.data.users);

  const twoUsersPromise = axios
    .get('http://localhost:3001/api/users?limit=2&offset=3')
    .then(response => response.data.users);

    return Promise
      .all([fiveUsersPromise, twoUsersPromise])
      .then(responses => {
        const fiveUsers = responses[0];
        const twoUsers = responses[1];
        // console.log(fiveUsers.length);
        // console.log(twoUsers.length);

        const firstUser = twoUsers[0];
        const secondUser = twoUsers[1];

        const firstUserAll = fiveUsers[3];
        const secondUserAll = fiveUsers[4];

        expect(firstUser).toEqual(firstUserAll);
        expect(secondUser).toEqual(secondUserAll);
      });
});
Async Await

ex:

async function iWantToReturnAPromise() {
  return 'hi';
}

async function iCallAPromise() {
  const result = await iWantToReturnAPromise();
  console.log(result); //'hi'...if no 'await', it would simply return a Promise object.
}

async/await...when you hit an 'await' keyword in an async function, JS doesn't actually block; it keeps going and runs other code. when the promise you're awaiting resolves, execution picks back up and runs the rest of the function.

async/await LOOKS like nothing happens until the await resolves, but that's not true (as we just explained).
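
A tiny sketch to show what that means (hypothetical functions; watch the order of the logs):

async function getGreeting() {
  return 'hi';
}

async function main() {
  console.log('before await');
  const greeting = await getGreeting(); // main() pauses here, but JS keeps running other code
  console.log(greeting);
}

main();
console.log('this logs BEFORE "hi", because await does not block the rest of the program');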

now look how we've just refactored the 'can get users' test from using promises to async/await:

test('can get users', async () => { //remember, this is JEST!!
  const response = await axios.get('http://localhost:3001/api/users');

  const user = response.data.users[0]; //grab a user object to test against the schema
  expect(user).toMatchObject({
    name: expect.any(String),
    username: expect.any(String)
  });
});

there's no 'root' await at the module level here, so remember: async/await happens INSIDE an async function, not just anywhere in the module file.

and look at how we refactor our 'can get 2 users at offset 3' test:

test('can get 2 users at offset 3', async () => { //remember, this is JEST!!
  const fiveUsers = await axios
    .get('http://localhost:3001/api/users?limit=5')
    .then(response => response.data.users);

  const twoUsers = await axios
    .get('http://localhost:3001/api/users?limit=2&offset=3')
    .then(response => response.data.users);

    const firstUser = twoUsers[0];
    const secondUser = twoUsers[1];

    const firstUserAll = fiveUsers[3];
    const secondUserAll = fiveUsers[4];

    expect(firstUser).toEqual(firstUserAll);
    expect(secondUser).toEqual(secondUserAll);
});

just a summary here...the shape of your integration tests should follow this pattern: set up the state, do the action, make the assertion, and then tear down.
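
As a rough skeleton, that shape might look something like this (createTestUser, deleteTestUser, and the endpoint are placeholders, not the workshop's actual helpers):

test('some use case', async () => {
  // 1. setup: get the world into a known state
  const testUser = await createTestUser();

  // 2. action: do the thing a real consumer of the code would do
  const response = await api.get(`users/${testUser.username}`);

  // 3. assertion: check the state of the world
  expect(response.data.user.username).toBe(testUser.username);

  // 4. teardown: clean up so other tests start fresh
  await deleteTestUser(testUser);
});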

important: whether you're doing unit or integration testing, the important piece is that you're interacting with the thing you're testing in the same way regular code would interact with it. make sure your tests actually validate what your code is doing.

sometimes you have to hit the API twice to confirm what you have is correct (hence the 'get 2 users' and 'get 5 users' test).

challenge 5

with unit tests it's super easy to co-locate tests, but with integration tests you're testing the whole system, or at least a good part of it, so that's why api/test/integration exists...that's where we're putting our integration tests.

cool tip: so we don't have to keep typing out our API paths, axios comes with a handy create() method for setting the base URL once and reusing it:

const api = axios.create({
  baseURL: 'http://localhost:3001/api/'
});

so for example this:

test('can get 2 users at offset 3', async () => { //remember, this is JEST!!
  const fiveUsers = await axios
    .get('http://localhost:3001/api/users?limit=5')
    .then(response => response.data.users);

  const twoUsers = await axios
    .get('http://localhost:3001/api/users?limit=2&offset=3')
    .then(response => response.data.users);

    const firstUser = twoUsers[0];
    const secondUser = twoUsers[1];

    const firstUserAll = fiveUsers[3];
    const secondUserAll = fiveUsers[4];

    expect(firstUser).toEqual(firstUserAll);
    expect(secondUser).toEqual(secondUserAll);
});

becomes this:

test('can get 2 users at offset 3', async () => { //remember, this is JEST!!
  const fiveUsers = await api
    .get('users?limit=5')
    .then(response => response.data.users);

  const twoUsers = await api
    .get('users?limit=2&offset=3')
    .then(response => response.data.users);

    const firstUser = twoUsers[0];
    const secondUser = twoUsers[1];

    const firstUserAll = fiveUsers[3];
    const secondUserAll = fiveUsers[4];

    expect(firstUser).toEqual(firstUserAll);
    expect(secondUser).toEqual(secondUserAll);
});

question: what do you unit test and what do you integration test?

NOTE: integration tests take more setup, especially with edge cases. the higher you go up the testing stack, the higher the cost of building and maintaining those tests. for the author, if he's writing integration tests for something, he normally won't also write unit tests for it. normally he'll have e2e tests that cover the 'happy path' stuff that's super important to get right, then drop down a level to integration tests for more fine-grained things, then get even more fine-grained with unit tests for stuff that's maybe less important or has a lot of edge cases he wants to cover.

with testing, it appears you just learn things down the road...it's all tradeoffs: "am I ok being less confident that this always works the way I specified?"

EX: the importance of 'adding a comment to a post' vs. the importance of 'being able to log in'...he may write e2e tests for the login functionality and unit tests for the comment component.

challenge 5 Solution

some folks think you should only have 1 assertion per test. it doesn't matter! your goal is to cover a use case, so you should make all the assertions necessary to make sure that use case is covered.

author's tests for ch.5 solution

test('can get articles with a limit', async () => {
  const limit = 4;
  const articles = await api
    .get(`articles?limit=${limit}`)
    .then(response => {
      return response.data.articles;
    });

  expect(articles).toHaveLength(limit);
});

NOTES:

  • you can get individual coverage reports for unit tests and integration tests.

my tests for ch5 solution

test('can get article', async () => {
  const response = await api.get('articles');
  const article = response.data.articles[0];

  expect(article).toMatchObject({
    slug: expect.any(String),
    title: expect.any(String),
    description: expect.any(String),
    body: expect.any(String),
    createdAt: expect.any(String),
    updatedAt: expect.any(String),
    tagList: expect.any(Array),
    favorited: expect.any(Boolean),
    favoritesCount: expect.any(Number),
    author: expect.any(Object)
  });
});

test("can get all tags and make sure they aren't duplicated", async () => {
  const response = await api.get('tags');
  const tags = response.data.tags;

  expect(Array.isArray(tags)).toEqual(true);

  // a Set drops duplicates, so the sizes only match if every tag is unique
  const uniqueTags = new Set(tags);
  expect(uniqueTags.size).toBe(tags.length);
});
Authentication

similar to mocha, we'll use Jest's 'describe' method to separate tests that do and don't require authentication:

describe.only('authenticated', () => {
  let cleanupUser;

  beforeAll(async () => {
    const result = await createNewUser();
    console.log(result.user); //our token resides here
    const token = result.user.token;

    api.defaults.headers.common.authorization = `Token ${token}`; // "if there's no other authorization header, use this one"

    cleanupUser = result.cleanup; // keep the cleanup function itself so afterAll can call it later
  });

  afterAll(async () => {
    await cleanupUser();
    api.defaults.headers.common.authorization = ''; //clean it up!
  });

  test('works', () => {
    console.log('here');
  });
});

from here on, we can make authenticated requests.
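
For example, something like this could now live inside that describe block (a sketch that assumes the RealWorld API's 'current user' endpoint at GET /api/user; not the course's exact test):

test('can get the currently authenticated user', async () => {
  const response = await api.get('user'); // the authorization header was set in beforeAll
  expect(response.data.user).toMatchObject({
    username: expect.any(String),
    email: expect.any(String),
  });
});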

Honestly, this instructor kinda blows here and I'll have to look through the actual FINISHED integration tests to get a better understanding of authenticated tests. don't really like this, but it is what it is. :/

Client Side testing
  • Question: can you use Jest for everything, or do you need specific testing frameworks for specific app frameworks?

Jest works for everything, and it works well with React since it's also a Facebook project. Jest also works with Angular 2 via an additional library.

  • Question: should I be using Jest with PhantomJS? no; Jest uses jsdom and will work for 99.9% of your test cases.

interesting: the author is using data- attributes in the HTML to target elements in his tests.

before we test our toggle button, let's ask ourselves: what do we want to test in this component? what are the important bits that would make us feel confident that this button always works?

  1. we want to make sure the state reflects the change when we click the button.
  • can't we just check the state directly? yes, but if we change the state property name we then have to change the tests too. that's brittle, and it bakes implementation details into our test, which we shouldn't do...let's think of something better than directly checking state. perhaps we can check that the new styles have been applied!
  2. we want to make sure clicking the button fires off the fn correctly.
  3. we want to make sure the initial state of the button is communicated correctly.
Snapshot Testing

let's test the simplest case: let's see what happens when we don't provide any props besides what's required: onToggle and children....

let's run: npm start client.demo.unit

and add this to client/demo/unit/__tests__/toggle.js:

import React from 'react';
import {render} from 'enzyme';
import Toggle from '../toggle';

test('the component renders with defaults', () => {
  const wrapper = render(<Toggle onToggle={() => {}}>I am child</Toggle>);
  console.log(wrapper.html());
});

Snapshot testing = snapshot testing lets us see diffs in our output as they occur. inside of our test, instead of using expect(something).toEqual(somethingElse), we can use expect(something).toMatchSnapshot()...this runs the test, creates a 'snapshot' file recording the output, and shows the diff if the test fails (similar to a git diff).

  • this applies to client-side testing as you can snapshot react components that have been rendered or mounted.

EX:

function MyComp(){
  return <div>hello</div>;
}

test('the component renders with defaults', () => {
  // const wrapper = render(<Toggle onToggle={() => {}}>I am child</Toggle>);
  // console.log(wrapper.html());

   const wrapper = render(<MyComp />);
   expect(wrapper).toMatchSnapshot();
});

which outputs __snapshots__/toggle.js.snap:

// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`the component renders with defaults 1`] = `
<div>
  hello
</div>
`;

so if you change 'hello' to 'hell' and then run the test again, you'll see the test fail. that's totally fine, because we can update our snapshot to what it needs to be automatically by hitting the u key in the terminal (in watch mode). that's super useful.

let's look at the react component we were using:

import React from 'react';
import {render} from 'enzyme';
import Toggle from '../toggle';
// import { wrap } from 'module';

// function MyComp(){
//   return <div>hell</div>;
// }

test('the component renders with defaults', () => {
  const wrapper = render(<Toggle onToggle={() => {}}>I am child</Toggle>);
  // // console.log(wrapper.html());

  //  const wrapper = render(<MyComp />);
   expect(wrapper).toMatchSnapshot();
});

our snapshot will be:

// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`the component renders with defaults 1`] = `
.css-vp4bp7,
[data-css-vp4bp7] {
  text-align: center;
  display: inline-block;
  margin-bottom: 0px;
  font-size: 14px;
  font-weight: 400;
  line-height: 1.4;
  padding: 6px 12px;
  cursor: pointer;
  border-radius: 4px;
  color: #fff;
  background-color: #337ab7;
  border-color: #285f8f;
}

<button
  class="css-vp4bp7"
  data-test="button"
>
  I am child
</button>
`;

^ the generated css comes not from Jest itself, but from a serializer plugin that hooks into Jest via the snapshotSerializers option in the Jest config. if you're using glamorous you'll want the jest-glamor-react library...and then in our client/config/jest/setup-framework.js file, we set up the serializer to add our css to our snapshots.

^ this is cool because jest hooks into the snapshot process and can grab stuff that's relevant for your test (like, for example, styles).
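
A rough sketch of what that setup file might contain (assuming the jest-glamor-react serializer; the exact import shape may differ):

// client/config/jest/setup-framework.js - register the style serializer (sketch)
import serializer from 'jest-glamor-react';

// adds the relevant CSS to component snapshots
expect.addSnapshotSerializer(serializer);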

Visual regression testing
  • nice, but hard to set up. you can get pretty much the same initial guarantees as snapshot testing.
Simulate Event Testing - we'll test to make sure the component does what we want it to do when an event occurs (ex: click the button, something happens).
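
Here's a rough sketch of what that could look like with enzyme and a Jest mock function (it assumes the Toggle calls its onToggle prop when the button is clicked; the data-test="button" attribute comes from the snapshot above):

import React from 'react';
import {shallow} from 'enzyme';
import Toggle from '../toggle';

test('calls onToggle when the button is clicked', () => {
  const handleToggle = jest.fn(); // a Jest mock so we can assert it was called
  const wrapper = shallow(<Toggle onToggle={handleToggle}>I am child</Toggle>);

  wrapper.find('[data-test="button"]').simulate('click');

  expect(handleToggle).toHaveBeenCalledTimes(1);
});
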
Testing Routes

in an integration test, you should start at the very top, and the very top is our routes.

Testing Workshop

πŸ‘‹ hi there! My name is Kent C. Dodds! This is a workshop repo to teach you about testing JavaScript applications.


Thank You

Big thanks to the RealWorld project from GoThinkster. This project is a copy of the Node implementation and the React implementation of the RealWorld project.

Also thank you to all the contributors

Topics covered

  1. Unit Testing with Jest
  2. Integration Testing with Jest
  3. End to End (E2E) Testing with Cypress

We'll mention other forms of testing, but these are the types we'll focus on and learn in this workshop. We'll learn about the benefits (and tradeoffs) of TDD. We'll learn how to configure the tools and why, when, where, and what to test.

Project

Branches

This project has been used to teach about testing in various settings. You may want to switch to the appropriate branch for this workshop. Otherwise the code you're looking at may not be exactly the same as the code used in the setting you're working with.

  • Frontend Masters: the fem branch

System Requirements

  • git v2.10.2 or greater
  • NodeJS v6.9.5 or greater
  • yarn v0.20.3 or greater (or npm v4.2.0 or greater)
  • MongoDB v3.4.2 or greater

All of these must be available in your PATH. To verify things are set up properly, you can run this:

git --version
node --version
yarn --version
mongod --version

If you have trouble with any of these, learn more about the PATH environment variable and how to fix it here for windows or mac/linux.

Setup

After you've made sure to have the correct things (and versions) installed, you should be able to just run a few commands to get set up:

git clone https://github.com/kentcdodds/testing-workshop.git
cd testing-workshop
npm run setup --silent
node ./scripts/autofill-feedback-email.js YOUR@EMAIL.com
git commit -am "ready to go"

Change YOUR@EMAIL.com to your actual email address

This may take a few minutes. If you get any errors, run git reset origin/master --hard to make sure you have a clean project again, then please read the error output and see whether there's any instructions to fix things and try again. If you're still getting errors or need any help at all, then please file an issue.

Note: You might see this:

Cypress Network issue

I'm not sure how to prevent this from happening (suggestions appreciated!) but it happens every time you run the e2e tests. Just do nothing or hit Allow to make it go away (super annoying). Sorry about that 😞

Cypress

If you're a windows user, please see the next section...

For everyone else, you'll want to come with Cypress.io downloaded, installed and have an account ready to go. Please follow these instructions to do this!

Windows users!!

Unfortunately, the cypress application does not yet support the Windows platform. (Bug them about it here). You should still be able to run cypress in "headless" mode, but you will be unable to open the application for development.

To get around this issue, you'll have to run the E2E portion of the workshop on Linux or Mac. I recommend either installing and booting your machine in Linux, or running a Linux Virtual Machine on your Windows computer.

Alternatively, you could just forego the application bit and mostly observe that portion of the workshop. If you're doing this with a group, perhaps you could pair with someone who has a Mac or Linux machine.

Running the app

To get the app up and running (and really see if it worked), run:

npm start dev

# if using yarn
yarn start dev

This should start mongod, the api server, and the client server all at the same time. Your browser should open up automatically to http://localhost:8080 (if it doesn't, just open that yourself) and you should be able to start messing around with the app.

Here's what you should be looking at:

Conduit Screenshot

If this fails at any point for you, please first see Troubleshooting and if you still can't get it working, make an issue.

Login

If you want to login, there's a user you can use:

  • Email: joe@example.com
  • Password: joe

To stop all the servers, hit Ctrl + C.

Protip: we're using nps in this project. If you want to type less, then you can install nps globally: yarn global add nps (or npm i -g nps) and then you can run nps instead of npm start

Troubleshooting

"npm run setup" command not working

Here's what the setup script does. If it fails, try doing each of these things individually yourself:

# verify your environment will work with the project
node ./scripts/verify

# install dependencies in the root of the repo
yarn

# install dependencies in the api directory
cd api
yarn

# install dependencies in the client directory
cd ../client
yarn

# get back to the root of the repo
cd ..

# load the database with fake data
node ./scripts/load-database

# verify the project is ready to run
npm start lint
npm start split.api.verify
npm start split.client.verify
npm start split.e2e.verify

If any of those scripts fail, feel free to file an issue with the output from that script. I will try to help if I can.

In addition, during some of these steps, some files get temporarily changed, and if a step fails, those files may have been changed but not cleaned up. So when everything's finished, run:

git reset --hard HEAD

Just to make sure nothing's left over.

"npm start dev" command not working

If it doesn't work for you, you can start each of these individually yourself:

npm start dev.mongo
npm start dev.api
npm start dev.client

"verify.js" is saying something's wrong with mongo

The mongod binary needs to be available in your PATH for you to run mongod from the command line (which is what this project's scripts do for you). Learn how to do this on windows or on mac.

Note: you'll need to open a new terminal/command prompt window after you've done this.

Structure

This project has a bit of a unique setup. Normally you'll have just a single package.json at the root of your repository, but to simplify setup I've included both the api and client projects in a single repository. The root of the project has a package.json as does api, and client. Most of our time working on tooling and running tests will be in one of these sub-directories (with the exception of the E2E tests).

LICENSE

The original projects are licensed as noted in their respective package.json files. The rest of this project is MIT licensed.

Contributors

Thanks goes to these wonderful people (emoji key):


  • Thinkster πŸ’»
  • Kent C. Dodds πŸ’» πŸ“– πŸš‡ ⚠️
  • Callum Mellor-Reed πŸ› πŸ’»
  • Eric McCormick πŸ› πŸ’»
  • Paul Falgout πŸ’» πŸ“–
  • Brett Caudill πŸ’» πŸ“–
  • Jennifer Mann πŸ›
  • Brian Mann πŸ›
  • Francisco Ramini πŸ“–
  • Romario πŸ“–

This project follows the all-contributors specification. Contributions of any kind welcome!