
Fixed broken links and typos (#194)

* chore(links): fixed miscellaneous broken links

* chore(links): updated links to Colophon schema

* chore(typos): fixed miscellaneous typos
thomasgohard authored and ahmadnassri committed Nov 5, 2018
1 parent 71bc1ce commit f8a6b75f4d1a38a09a18d25ed8fb5b66ac83c66b
@@ -19,7 +19,7 @@ As an API implementor, use a module like [node-error][node-error] to decorate yo
## References
- [RFC7807: Problem Details for HTTP APIs][rfc7807]
- - [node-error: Ahmad's implementation of RFC7807](node-error)
+ - [node-error: Ahmad's implementation of RFC7807][node-error]
[rfc7807]: https://tools.ietf.org/html/rfc7807
[node-error]: https://github.com/ahmadnassri/node-error
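For reference, a problem-details payload looks like the sketch below. This is a hand-rolled illustration using Node's built-in `http` module rather than the `node-error` API; the field values are taken from RFC 7807's own example.

```js
// Illustrative only: serving an RFC 7807 problem document with Node's http module.
// The example field values come from RFC 7807 itself; this is not the node-error API.
const http = require('http')

http.createServer((req, res) => {
  res.writeHead(403, { 'Content-Type': 'application/problem+json' })
  res.end(JSON.stringify({
    type: 'https://example.com/probs/out-of-credit',
    title: 'You do not have enough credit.',
    status: 403,
    detail: 'Your current balance is 30, but that costs 50.',
    instance: '/account/12345/msgs/abc'
  }))
}).listen(3000)
```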
@@ -44,4 +44,4 @@
### Guidelines
- [Accessibility](accessibility.md)
- - [Supported Browers](supported-browsers.md)
+ - [Supported Browsers](supported-browsers.md)
@@ -13,7 +13,7 @@ The `README` file is a typical place where this information can be found. This w
## What
- Use a `colophon.yml` file in your repository that follows the [Colophon schema](https://github.com/ahmadnassri/colophon).
+ Use a `colophon.yml` file in your repository that follows the [Colophon schema](https://github.com/project-colophon/schema).
## How
@@ -49,12 +49,12 @@ references:
The `id` can be used to link this GitHub repo to the project through other media, such as by placing the `id` in an HTML `meta` tag or in the HTTP response headers of an API.
- See the [spec](https://github.com/ahmadnassri/colophon/tree/master/spec/1.0.0) for other fields that may be appropriate for your project.
+ See the [spec](https://github.com/project-colophon/schema/tree/master/schema/1.0) for other fields that may be appropriate for your project.
## Who
Everyone!
## References
- - [Colophon schema](https://github.com/ahmadnassri/colophon)
+ - [Colophon schema](https://github.com/project-colophon/schema)
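As a rough sketch of the HTTP-header option mentioned above, the snippet below echoes a colophon `id` on every API response. The header name and the `id` value are invented for illustration; they are not prescribed by the Colophon schema.

```js
// Sketch: exposing the colophon id in an API response header.
// 'X-Colophon-Id' and 'my-project' are placeholders; read the real id from colophon.yml.
const http = require('http')
const COLOPHON_ID = 'my-project'

http.createServer((req, res) => {
  res.setHeader('X-Colophon-Id', COLOPHON_ID)
  res.end('ok')
}).listen(3000)
```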
@@ -99,7 +99,7 @@ Any teams deploying to www.telus.com:
- [Inbound proxies](../delivery/inbound-proxies.md)
- [inbound.telus-gateway-staging-config][telus-gateway-staging-config]
- [inbound.telus-gateway-production-config][telus-gateway-production-config]
- - [Adobe Target](//marketing.adobe.com/resources/help/en_US/target/target/c_spa-visual-experience-composer.html)
+ - [Adobe Target](https://marketing.adobe.com/resources/help/en_US/target/target/c_spa-visual-experience-composer.html)
[rfc-6570]: https://tools.ietf.org/html/rfc6570 "RFC 6570"
@@ -20,7 +20,7 @@ This includes:
## How
- We created a separate GitHub organization: [`telusdigital-archive`][archive] that will host all archived repositories, leveraging the ["transfer ownership"][transfer-docs] feature in GitHub.
+ We created a separate GitHub organization: [`telus-archive`][archive] that will host all archived repositories, leveraging the ["transfer ownership"][transfer-docs] feature in GitHub.
Once a repo is transferred, the following actions need to be taken:
@@ -32,5 +32,5 @@ Once repo is transferred, the following actions need to be taken:
- disable GitHub Pages _(if enabled)_
[eol]: https://en.wikipedia.org/wiki/End-of-life_(product)
- [archive]: https://github.com/telusdigital-archive
+ [archive]: https://github.com/telus-archive
[transfer-docs]: https://help.github.com/articles/transferring-a-repository-owned-by-your-organization/
@@ -2,7 +2,7 @@
## Why
- Small [user stories](process/user-stories.md) are beautiful, as they flow through the system more quickly. This gives us faster feedback, which means that it's easier to find and fix bugs. It also means that each increment has less risk in it, as less code is changed.
+ Small [user stories](user-stories.md) are beautiful, as they flow through the system more quickly. This gives us faster feedback, which means that it's easier to find and fix bugs. It also means that each increment has less risk in it, as less code is changed.
### Faster feedback compound benefits
@@ -14,7 +14,7 @@ One popular approach is to aim for stories that take one day to kick off, develo
## How
- The [INVEST](process/user-stories.md) principles around what a good story looks like are still important - each story should be independent, negotiable, valuable, small, and testable.
+ The [INVEST](user-stories.md) principles around what a good story looks like are still important - each story should be independent, negotiable, valuable, small, and testable.
Given these constraints, how do you make stories smaller? Here's one example<sup>[1](#footnote1)</sup>:
@@ -47,7 +47,7 @@ Bill Wake's [article on the INVEST mnemonic](http://xp123.com/articles/invest-in
> - **Small**: stories should be built in a small amount of time, usually a matter of person-days. Certainly you should be able to build several stories within one iteration.
> - **Testable**: you should be able to write tests to verify the software for this story works correctly.
- For more on the importance and practical advice on how to achieve small stories, check out the article on [smaller stories](process/small-stories-are-faster.md).
+ For more on the importance and practical advice on how to achieve small stories, check out the article on [smaller stories](small-stories-are-faster.md).
Once a story has stabilized, and before it's picked up by developers, it's common practice to include acceptance criteria in order to help a developer know when they're finished. QA folks often go by the acceptance criteria when validating a story and developing a test strategy. When present, acceptance criteria often follow a "Given, When, Then" pattern. Martin Fowler's article on the [Given When Then](https://martinfowler.com/bliki/GivenWhenThen.html) pattern describes it well:
@@ -75,14 +75,14 @@ However, you should invest in these tests, **because** users’ affection and tr
- [Saucelabs][saucelabs]
[unit-tests]: functional/unit.md
- [contract-tests]: functional/consumer_driven_contracts.md
+ [contract-tests]: functional/consumer-driven-contracts.md
[e2e-functional]: functional/e2e.md
[e2e-ui]: functional/visual-regression.md
[seo]: nonfunctional/seo.md
[security]: nonfunctional/security.md
[performance]: nonfunctional/performance.md
[load]: nonfunctional/load.md
- [acessibility]: nonfunctional/accessibility.md
+ [accessibility]: nonfunctional/accessibility.md
[analytics]: nonfunctional/analytics.md
[functional-testing]: https://en.wikipedia.org/wiki/Functional_testing
@@ -2,21 +2,21 @@
## Why
- When our application gets deployed through our [Continuous Delivery](../process/continuous-delivery.md) pipeline, we want to know that an application is working when it is deployed to our pre-production and production environments.
+ When our application gets deployed through our [Continuous Delivery](../../process/continuous-delivery.md) pipeline, we want to know that an application is working when it is deployed to our pre-production and production environments.
## What
Even though unit tests are passing, there may be issues with the application startup, the deployment configuration, or the downstream dependencies that keep our application from working. An End to End test is an automated functional test that exercises the entire scope of our application, simulating how our clients will use it, to ensure that features keep working even when integrating with live downstream services and external interfaces.
## How
- Our [starter kits](../development/starter-kits.md) ship out of the box with an End to End testing step as part of their delivery pipeline.
+ Our [starter kits](../../development/starter-kits.md) ship out of the box with an End to End testing step as part of their delivery pipeline.
## When
Writing E2E functional tests: Ideally, if [BDD](https://en.wikipedia.org/wiki/Behavior-driven_development) is done right, after the UAT is defined in the story.
- Running E2E functional tests: A lightweight E2E smoke test suite should be run as part of the delivery pipeline; a more robust regression suite should be run on a daily basis (assuming [CI](../process/continuous-integration.md)).
+ Running E2E functional tests: A lightweight E2E smoke test suite should be run as part of the delivery pipeline; a more robust regression suite should be run on a daily basis (assuming [CI](../../process/continuous-integration.md)).
## Standards
@@ -32,7 +32,7 @@ UI's shall be end-to-end tested using [Nightwatch.js](http://nightwatchjs.org/).
We can also use Nightwatch to test our application on [Sauce Labs](https://saucelabs.com/) (a cross-browser testing platform), which offers us the ability to test numerous desktop and mobile browsers in parallel. The isomorphic starter kit ships with the tooling necessary to run its tests against Sauce Labs.
- Currently we do not have enough threads to run this as part of our pipelines, so it is used for ad-hoc testing. You'll need to authenticate with [shippy](../delivery/shippy.md) in order to get the credentials necessary to use the `./run-saucelabs.sh` CLI tool.
+ Currently we do not have enough threads to run this as part of our pipelines, so it is used for ad-hoc testing. You'll need to authenticate with [shippy](../../delivery/shippy.md) in order to get the credentials necessary to use the `./run-saucelabs.sh` CLI tool.
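A minimal Nightwatch smoke test looks roughly like the sketch below; the URL, selector, and expected title are placeholders, and the starter kit's own suites are more involved.

```js
// Minimal Nightwatch smoke test sketch; the URL and title check are placeholders.
module.exports = {
  'home page renders': function (browser) {
    browser
      .url('https://staging.example.com/')   // placeholder route
      .waitForElementVisible('body', 5000)
      .assert.titleContains('TELUS')         // hypothetical expectation
      .end()
  }
}
```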
#### Device lab
@@ -6,7 +6,7 @@ When designing or implementing a feature, we want to know that we are doing it p
## What
- As part of our [Continuous Integration](../process/continuous-integration.md) practices, we are pushing for Test Driven Development, where unit tests are written BEFORE a new feature. It falls in line with the construction proverb: "measure twice, cut once".
+ As part of our [Continuous Integration](../../process/continuous-integration.md) practices, we are pushing for Test Driven Development, where unit tests are written BEFORE a new feature. It falls in line with the construction proverb: "measure twice, cut once".
By writing the test first:
@@ -10,7 +10,7 @@ Use [node-resemble-js](https://www.npmjs.com/package/node-resemble-js) to perfor
## How
- In our [isomorphic starter kit](../development/starter-kits.md), we created a [nightwatch](http://nightwatchjs.org/) custom assertion library that runs in the [e2e](e2e.md) testing phase.
+ In our [isomorphic starter kit](../../development/starter-kits.md), we created a [nightwatch](http://nightwatchjs.org/) custom assertion library that runs in the [e2e](e2e.md) testing phase.
When you run the assertion for the first time, it will generate and store new baseline screenshots for your tests.
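Under the hood, a comparison with node-resemble-js looks roughly like the sketch below (check the package README for the exact API); the screenshot paths and the 1% mismatch threshold are assumptions.

```js
// Rough sketch of a node-resemble-js screenshot comparison.
// Paths and the 1% threshold are placeholders; the exact API may differ by version.
const resemble = require('node-resemble-js')

resemble('screenshots/baseline/home.png')
  .compareTo('screenshots/latest/home.png')
  .onComplete((data) => {
    const mismatch = Number(data.misMatchPercentage)
    if (mismatch > 1) {
      throw new Error(`Visual regression detected: ${mismatch}% mismatch`)
    }
  })
```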
@@ -6,15 +6,15 @@ Accessibility is about ensuring as many customers as possible can effectively us
The [Accessibility for Ontarians with Disabilities Act](http://www.aoda.ca/) (AODA) and the [Canadian Radio-television and Telecommunications Commission](http://www.crtc.gc.ca/eng/home-accueil.htm) (CRTC) mandate accessibility compliance. TELUS is committed to meeting or exceeding the [Web Content Accessibility Guidelines (WCAG 2.0 AA)](https://www.w3.org/WAI/WCAG20/quickref/).
- When our applications successfully pass through our [Continuous Delivery](/process/continuous-delivery.md) pipeline, we want to know that they are accessible.
+ When our applications successfully pass through our [Continuous Delivery](../../process/continuous-delivery.md) pipeline, we want to know that they are accessible.
## What
- Automated accessibility testing is performed as part of our [Continuous Delivery](/process/continuous-delivery.md) pipeline. This is complemented with effective manual testing in order to provide reliable results. Automated testing tools will identify programmatic issues, but manual testing is needed to validate usability and content consistency.
+ Automated accessibility testing is performed as part of our [Continuous Delivery](../../process/continuous-delivery.md) pipeline. This is complemented with effective manual testing in order to provide reliable results. Automated testing tools will identify programmatic issues, but manual testing is needed to validate usability and content consistency.
## How
- Our [isomorphic starter kit](/development/starter-kits.md) ships out of the box with an end-to-end [aXe](https://axe-core.org/) testing step as part of its delivery pipeline. We use the [aXe extension for Chrome](https://chrome.google.com/webstore/detail/axe/lhdoppojpmngadmnindnejefpokejbdd) and the [Wave Toolbar](http://wave.webaim.org/extension/) to validate our accessibility while running [end to end](e2e.md) tests.
+ Our [isomorphic starter kit](../../development/starter-kits.md) ships out of the box with an end-to-end [aXe](https://axe-core.org/) testing step as part of its delivery pipeline. We use the [aXe extension for Chrome](https://chrome.google.com/webstore/detail/axe/lhdoppojpmngadmnindnejefpokejbdd) and the [Wave Toolbar](http://wave.webaim.org/extension/) to validate our accessibility while running [end to end](../functional/e2e.md) tests.
We also use text-to-speech engines (screen readers) like [NVDA](https://www.nvaccess.org/) and [Voiceover](https://www.apple.com/ca/accessibility/iphone/vision/) to manually review functionality and usability. Manual keyboard testing and quick screen reader review are done to validate automated testing and general usability.
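For the automated side, an aXe check boils down to something like the sketch below, run in the page context (for example from an e2e test) once the axe-core script has been injected.

```js
// Sketch of an axe-core scan run in the browser, assuming axe-core is already loaded.
// Failing on any violation is a policy choice for the example, not a rule from the kit.
axe.run(document).then((results) => {
  if (results.violations.length > 0) {
    throw new Error(`Found ${results.violations.length} accessibility violations`)
  }
})
```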
@@ -2,11 +2,11 @@
## Why
- Our [Analytics](../analytics/) practice relies on a `dataLayer` object that is injected onto the DOM at build time. This object must follow a specific structure and format; values will vary based on the application and route of the website. We should test for that structure, format, and values as part of our automated testing.
+ Our [Analytics](../../analytics/README.md) practice relies on a `dataLayer` object that is injected onto the DOM at build time. This object must follow a specific structure and format; values will vary based on the application and route of the website. We should test for that structure, format, and values as part of our automated testing.
## What
- Validate the object in the [e2e](e2e.md) testing phase using [JSON Schema][json-schema]
+ Validate the object in the [e2e](../functional/e2e.md) testing phase using [JSON Schema][json-schema]
1. validate structure & format _(consistent across **all** implementations)_
2. validate values across pages per project _(custom schemas needed in each project)_
@@ -17,7 +17,7 @@ In our [isomorphic starter kit][starter-kit], we have a standard schemas that de
Additionally, if a custom schema is provided for the project level, it will validate values as well.
- These are automated [gating](../process/continuous-delivery.md#automated-gating) tests. If the structure or content of your objects is incorrect, the test will fail, and the delivery pipeline will be halted.
+ These are automated [gating](../../process/continuous-delivery.md#automated-gating) tests. If the structure or content of your objects is incorrect, the test will fail, and the delivery pipeline will be halted.
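Conceptually, the gating check is just schema validation, along the lines of the sketch below (using `ajv` here; the schema fields and sample values are placeholders, not the standard TELUS schema).

```js
// Illustrative dataLayer validation with ajv; the schema and sample object are placeholders.
const Ajv = require('ajv')
const ajv = new Ajv()

const schema = {
  type: 'object',
  required: ['page'],
  properties: {
    page: {
      type: 'object',
      required: ['name', 'language'],
      properties: {
        name: { type: 'string' },
        language: { type: 'string', enum: ['en', 'fr'] }
      }
    }
  }
}

// stand-in for the window.dataLayer object injected at build time
const dataLayer = { page: { name: 'home', language: 'en' } }

const validate = ajv.compile(schema)
if (!validate(dataLayer)) {
  throw new Error(JSON.stringify(validate.errors))
}
```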
## When
@@ -14,7 +14,7 @@ Our starter kits, by default, are expected to serve 50 RPS ("requests per second
Use [Artillery](https://artillery.io/) to load test after the application is deployed to staging.
- The [starter kits](../development/starter-kits.md) include a load testing phase in their continuous delivery pipelines. By default this is just a simple test against the staging hello-world route. You can enhance this for your projects by customizing a YAML file with the load testing flow (see [docs](https://artillery.io/docs/getting-started/))
+ The [starter kits](../../development/starter-kits.md) include a load testing phase in their continuous delivery pipelines. By default this is just a simple test against the staging hello-world route. You can enhance this for your projects by customizing a YAML file with the load testing flow (see [docs](https://artillery.io/docs/getting-started/))
**Do not load test openshiftapps.com or telus.com routes**; instead, test the _internal_ OpenShift service route, so that we test our application and its downstream dependencies only.
@@ -34,4 +34,4 @@ Running load/stress tests: As part of the delivery pipeline
## References
- - [Artillery docs](https://artillery.io/docs/gettingstarted.html)
+ - [Artillery docs](https://artillery.io/docs/getting-started/)
@@ -10,7 +10,7 @@ Configure and run tests automatically in your pipeline
## How
- Automated Performance Testing is implemented in the isomorphic [starter kit](../development/starter-kits.md), using the [psi](https://www.npmjs.com/package/psi) library. For more details on setup, see [PageSpeed Insights](https://developers.google.com/speed/pagespeed/insights/).
+ Automated Performance Testing is implemented in the isomorphic [starter kit](../../development/starter-kits.md), using the [psi](https://www.npmjs.com/package/psi) library. For more details on setup, see [PageSpeed Insights](https://developers.google.com/speed/pagespeed/insights/).
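In code, a psi check reduces to something like the sketch below; the URL, strategy, and the score threshold of 80 are placeholders, and the return shape varies between psi versions.

```js
// Rough sketch of a PageSpeed Insights check with the psi package.
// URL, strategy, and the 80-point threshold are placeholders; return shape varies by version.
const psi = require('psi')

psi('https://example.com', { strategy: 'mobile' }).then((data) => {
  const score = data.ruleGroups.SPEED.score
  if (score < 80) {
    throw new Error(`PageSpeed score ${score} is below the threshold of 80`)
  }
})
```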
## When
@@ -2,7 +2,7 @@
## Why
- When our application gets deployed through our [Continuous Delivery](../process/continuous-delivery.md) pipeline, we want to know that our code is secure, and does not have vulnerable packages installed, so that we don't get owned.
+ When our application gets deployed through our [Continuous Delivery](../../process/continuous-delivery.md) pipeline, we want to know that our code is secure, and does not have vulnerable packages installed, so that we don't get owned.
## What
@@ -18,7 +18,7 @@ Security team to instill and maintain
### Node Security Platform
- Our [starter kits](../development/starter-kits.md) ship out of the box with [nsp](https://nodesecurity.io/) to scan the `package.json` for any known vulnerabilities. Our pipeline will fail if any are found.
+ Our [starter kits](../../development/starter-kits.md) ship out of the box with [nsp](https://nodesecurity.io/) to scan the `package.json` for any known vulnerabilities. Our pipeline will fail if any are found.
### TwistLock
