From a4205530e1714812aaaa7dbcc74c6f51dd1d03c5 Mon Sep 17 00:00:00 2001 From: Sam Foo Date: Wed, 31 Jan 2018 14:34:02 -0500 Subject: [PATCH] Fix spelling errors from Vale initial fix --- ...aming-data-processing-with-apache-storm.md | 2 +- ...to-scrape-a-website-with-beautiful-soup.md | 2 +- .../creating-your-first-chef-cookbook.md | 2 +- ...chef-server-workstation-on-ubuntu-14-04.md | 171 +++++++++--------- ...ow-to-install-ansible-and-run-playbooks.md | 2 +- .../vagrant-linode-environments.md | 2 +- ...er-commands-quick-reference-cheat-sheet.md | 2 +- .../how-to-install-openvz-on-debian-9.md | 2 +- ...e-on-ubuntu-12-04-for-instant-messaging.md | 2 +- ...ging-services-with-openfire-on-centos-5.md | 2 +- ...ervices-with-openfire-on-debian-5-lenny.md | 2 +- ...vices-with-openfire-on-debian-6-squeeze.md | 2 +- ...with-openfire-on-ubuntu-10-04-lts-lucid.md | 2 +- ...ces-with-openfire-on-ubuntu-9-04-jaunty.md | 2 +- ...ces-with-openfire-on-ubuntu-9-10-karmic.md | 2 +- ...ate-a-private-python-package-repository.md | 2 +- ...linode-with-xforwarding-on-ubuntu-12-04.md | 2 +- ...ing-graphic-software-xforwarding-debian.md | 2 +- ...-networking-with-elgg-on-debian-5-lenny.md | 2 +- ...s-using-filebeat-elastic-stack-centos-7.md | 2 +- ...er-logs-using-elastic-stack-on-debian-8.md | 2 +- ...run-spark-on-top-of-hadoop-yarn-cluster.md | 2 +- .../how-to-install-mariadb-on-centos-7.md | 2 +- ...-clusters-with-galera-debian-and-ubuntu.md | 2 +- .../build-database-clusters-with-mongodb.md | 8 +- .../mongodb/install-mongodb-on-centos-7.md | 2 +- .../install-mongodb-on-ubuntu-16-04.md | 4 +- ...l-workbench-for-database-administration.md | 34 ++-- ...tgresql-servers-with-pgadmin-on-macos-x.md | 2 +- ...-deploy-your-applications-using-wercker.md | 4 +- .../java/java-development-wildfly-centos-7.md | 4 +- ...onitor-filesystem-events-with-pyinotify.md | 2 +- .../nodejs/how-to-install-nodejs.md | 4 +- .../python/task-queue-celery-rabbitmq.md | 6 +- 
...e-for-web-development-on-remote-devices.md | 4 +- 35 files changed, 145 insertions(+), 146 deletions(-) diff --git a/docs/applications/big-data/big-data-in-the-linode-cloud-streaming-data-processing-with-apache-storm.md b/docs/applications/big-data/big-data-in-the-linode-cloud-streaming-data-processing-with-apache-storm.md index 0dcb40a144a..7413f24efd1 100644 --- a/docs/applications/big-data/big-data-in-the-linode-cloud-streaming-data-processing-with-apache-storm.md +++ b/docs/applications/big-data/big-data-in-the-linode-cloud-streaming-data-processing-with-apache-storm.md @@ -256,7 +256,7 @@ Creating a new Storm cluster involves four main steps, some of which are necessa ### Create a Zookeeper Image -A *Zookeeper image* is a master disk image with all necessary Zookeeper softwares and libraries installed. We'll create our using [Linode Images](/docs/platform/linode-images) The benefits of using a Zookeeper image include: +A *Zookeeper image* is a master disk image with all necessary Zookeeper software and libraries installed. We'll create ours using [Linode Images](/docs/platform/linode-images). The benefits of using a Zookeeper image include: - Quick creation of a Zookeeper cluster by simply cloning it to create as many nodes as required, each a perfect copy of the image - Distribution packages and third party software packages are identical on all nodes, preventing version mismatch errors diff --git a/docs/applications/big-data/how-to-scrape-a-website-with-beautiful-soup.md b/docs/applications/big-data/how-to-scrape-a-website-with-beautiful-soup.md index 5e69fe2e14f..4f90d520ec8 100644 --- a/docs/applications/big-data/how-to-scrape-a-website-with-beautiful-soup.md +++ b/docs/applications/big-data/how-to-scrape-a-website-with-beautiful-soup.md @@ -146,7 +146,7 @@ rec = { 'pid': result['data-pid'] -4. Other data attributes may be nested deeper in the HTML strucure, and can be accessed using a combination of dot and array notation.
For example, the date a result was posted is stored in `datetime`, which is a data attribute of the `time` element, which is a child of a `p` tag that is a child of `result`. To access this value use the following format: +4. Other data attributes may be nested deeper in the HTML structure, and can be accessed using a combination of dot and array notation. For example, the date a result was posted is stored in `datetime`, which is a data attribute of the `time` element, which is a child of a `p` tag that is a child of `result`. To access this value use the following format: 'date': result.p.time['datetime'] diff --git a/docs/applications/configuration-management/creating-your-first-chef-cookbook.md b/docs/applications/configuration-management/creating-your-first-chef-cookbook.md index 4b3f1434b8f..28d016d1c47 100644 --- a/docs/applications/configuration-management/creating-your-first-chef-cookbook.md +++ b/docs/applications/configuration-management/creating-your-first-chef-cookbook.md @@ -131,7 +131,7 @@ end knife cookbook upload lamp-stack -5. Add the recipe to a node's run-list, replaceing `nodename` with your chosen node's name: +5. Add the recipe to a node's run-list, replacing `nodename` with your chosen node's name: knife node run_list add nodename "recipe[lamp-stack::apache]" diff --git a/docs/applications/configuration-management/install-a-chef-server-workstation-on-ubuntu-14-04.md b/docs/applications/configuration-management/install-a-chef-server-workstation-on-ubuntu-14-04.md index 1f6417af080..88a43540c4e 100644 --- a/docs/applications/configuration-management/install-a-chef-server-workstation-on-ubuntu-14-04.md +++ b/docs/applications/configuration-management/install-a-chef-server-workstation-on-ubuntu-14-04.md @@ -35,7 +35,7 @@ This guide is written for a non-root user. 
Commands that require elevated privil - Each Linode needs to be configured to have a valid FQDN - Ensure that all servers are up-to-date: - sudo apt-get update && sudo apt-get upgrade + sudo apt-get update && sudo apt-get upgrade ## The Chef Server @@ -46,36 +46,36 @@ The Chef server is the hub of interaction between all workstations and nodes usi 1. [Download](https://downloads.chef.io/chef-server/#ubuntu) the latest Chef server core (12.0.8 at the time of writing): - wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/chef-server-core_12.0.8-1_amd64.deb + wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/chef-server-core_12.0.8-1_amd64.deb 2. Install the server: - sudo dpkg -i chef-server-core_*.deb + sudo dpkg -i chef-server-core_*.deb 3. Remove the download file: - rm chef-server-core_*.deb + rm chef-server-core_*.deb 4. Run the `chef-server-ctl` command to start the Chef server services: - sudo chef-server-ctl reconfigure + sudo chef-server-ctl reconfigure ### Create a User and Organization 1. In order to link workstations and nodes to the Chef server, an administrator and an organization need to be created with associated RSA private keys. From the home directory, create a `.chef` directory to store the keys: - mkdir .chef + mkdir .chef 2. Create an administrator. Change `username` to your desired username, `firstname` and `lastname` to your first and last name, `email` to your email, `password` to a secure password, and `username.pem` to your username followed by `.pem`: - sudo chef-server-ctl user-create username firstname lastname email password --filename ~/.chef/username.pem + sudo chef-server-ctl user-create username firstname lastname email password --filename ~/.chef/username.pem 2. Create an organization. The `shortname` value should be a basic identifier for your organization with no spaces, whereas the `fullname` can be the full, proper name of the organization. 
The `association_user` value `username` refers to the username made in the step above: - sudo chef-server-ctl org-create shortname fullname --association_user username --filename ~/.chef/shortname.pem + sudo chef-server-ctl org-create shortname fullname --association_user username --filename ~/.chef/shortname.pem - With the Chef server installed and the needed RSA keys generated, you can move on to configuring your workstation, where all major work will be performed for your Chef's nodes. + With the Chef server installed and the needed RSA keys generated, you can move on to configuring your workstation, where all major work will be performed for your Chef nodes. ## Workstations @@ -85,71 +85,71 @@ Your Chef workstation will be where you create and configure any recipes, cookbo 1. [Download](https://downloads.chef.io/chef-dk/ubuntu/) the latest Chef Development Kit (0.5.1 at time of writing): - wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.5.1-1_amd64.deb + wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.5.1-1_amd64.deb 2. Install ChefDK: - sudo dpkg -i chefdk_*.deb + sudo dpkg -i chefdk_*.deb 3. Remove the install file: - rm chefdk_*.deb + rm chefdk_*.deb 4. Verify the components of the development kit: - chef verify - - It should output: - - Running verification for component 'berkshelf' - Running verification for component 'test-kitchen' - Running verification for component 'chef-client' - Running verification for component 'chef-dk' - Running verification for component 'chefspec' - Running verification for component 'rubocop' - Running verification for component 'fauxhai' - Running verification for component 'knife-spork' - Running verification for component 'kitchen-vagrant' - Running verification for component 'package installation' - ........................ - --------------------------------------------- - Verification of component 'rubocop' succeeded.
- Verification of component 'kitchen-vagrant' succeeded. - Verification of component 'fauxhai' succeeded. - Verification of component 'berkshelf' succeeded. - Verification of component 'knife-spork' succeeded. - Verification of component 'test-kitchen' succeeded. - Verification of component 'chef-dk' succeeded. - Verification of component 'chef-client' succeeded. - Verification of component 'chefspec' succeeded. - Verification of component 'package installation' succeeded. + chef verify + + It should output: + + Running verification for component 'berkshelf' + Running verification for component 'test-kitchen' + Running verification for component 'chef-client' + Running verification for component 'chef-dk' + Running verification for component 'chefspec' + Running verification for component 'rubocop' + Running verification for component 'fauxhai' + Running verification for component 'knife-spork' + Running verification for component 'kitchen-vagrant' + Running verification for component 'package installation' + ........................ + --------------------------------------------- + Verification of component 'rubocop' succeeded. + Verification of component 'kitchen-vagrant' succeeded. + Verification of component 'fauxhai' succeeded. + Verification of component 'berkshelf' succeeded. + Verification of component 'knife-spork' succeeded. + Verification of component 'test-kitchen' succeeded. + Verification of component 'chef-dk' succeeded. + Verification of component 'chef-client' succeeded. + Verification of component 'chefspec' succeeded. + Verification of component 'package installation' succeeded. 5. Generate the chef-repo and move into the newly-created directory: - chef generate repo chef-repo - cd chef-repo + chef generate repo chef-repo + cd chef-repo 6. Make the `.chef` directory: - mkdir .chef + mkdir .chef ### Add the RSA Private Keys 1. The RSA private keys generated when setting up the Chef server will now need to be placed on the workstation. 
The process behind this will vary depending on if you are using SSH key pair authentication to log into your Linodes. - - If you are **not** using key pair authentication, then copy the file directly off of the Chef Server. replace `user` with your username on the server, and `123.45.67.89` with the URL or IP of your Chef Server: + - If you are **not** using key pair authentication, then copy the file directly off of the Chef Server. replace `user` with your username on the server, and `123.45.67.89` with the URL or IP of your Chef Server: - scp user@123.45.67.89:~/.chef/*.pem ~/chef-repo/.chef/ + scp user@123.45.67.89:~/.chef/*.pem ~/chef-repo/.chef/ - - If you **are** using key pair authentication, then from your **local terminal** copy the .pem files from your server to your workstation using the `scp` command. Replace `user` with the appropriate username, and `123.45.67.89` with the URL or IP for your Chef Server and `987.65.43.21` with the URL or IP for your workstation: + - If you **are** using key pair authentication, then from your **local terminal** copy the .pem files from your server to your workstation using the `scp` command. Replace `user` with the appropriate username, and `123.45.67.89` with the URL or IP for your Chef Server and `987.65.43.21` with the URL or IP for your workstation: - scp -3 user@123.45.67.89:~/.chef/*.pem user@987.65.43.21:~/chef-repo/.chef/ + scp -3 user@123.45.67.89:~/.chef/*.pem user@987.65.43.21:~/chef-repo/.chef/ 2. Confirm that the files have been copied successfully by listing the contents of the `.chef` directory: - ls ~/chef-repo/.chef + ls ~/chef-repo/.chef - Your `.pem` files should be listed. + Your `.pem` files should be listed. ### Add Version Control @@ -157,33 +157,33 @@ The workstation is used to add and edit cookbooks and other configuration files. 1. Download Git: - sudo apt-get install git + sudo apt-get install git 2. 
Configure Git by adding your username and email, replacing the needed values: - git config --global user.name yourname - git config --global user.email user@email.com + git config --global user.name yourname + git config --global user.email user@email.com 3. From the chef-repo, initialize the repository: - git init + git init 4. Add the `.chef` directory to the `.gitignore` file: - echo ".chef" > .gitignore + echo ".chef" > .gitignore 5. Add and commit all existing files: - git add . - git commit -m "initial commit" + git add . + git commit -m "initial commit" 6. Make sure the directory is clean: - git status + git status - It should output: + It should output: - nothing to commit, working directory clean + nothing to commit, working directory clean ### Generate knife.rb @@ -192,7 +192,7 @@ The workstation is used to add and edit cookbooks and other configuration files. 2. Copy the following configuration into the `knife.rb` file: - {{< file "~/chef-repo/.chef/knife.rb" >}} + {{< file "~/chef-repo/.chef/knife.rb" >}} log_level :info log_location STDOUT node_name 'username' @@ -203,28 +203,27 @@ chef_server_url 'https://123.45.67.89/organizations/shortname' syntax_check_cache_path '~/chef-repo/.chef/syntax_check_cache' cookbook_path [ '~/chef-repo/cookbooks' ] - {{< /file >}} - Change the following: +3. Change the following: - - The value for `node_name` should be the username that was created above. - - Change `username.pem` under `client_key` to reflect your `.pem` file for your **user**. - - The `validation_client_name` should be your organization's `shortname` followed by `-validator`. - - `shortname.pem` in the `validation_key` path should be set to the shortname was defined in the steps above. - - Finally the `chef_server-url` needs to contain the IP address or URL of your Chef server, with the `shortname` in the file path changed to the shortname defined above. + - The value for `node_name` should be the username that was created above. 
+ - Change `username.pem` under `client_key` to reflect your `.pem` file for your **user**. + - The `validation_client_name` should be your organization's `shortname` followed by `-validator`. + - `shortname.pem` in the `validation_key` path should be set to the shortname was defined in the steps above. + - Finally the `chef_server-url` needs to contain the IP address or URL of your Chef server, with the `shortname` in the file path changed to the shortname defined above. 3. Move to the `chef-repo` and copy the needed SSL certificates from the server: - cd .. - knife ssl fetch + cd .. + knife ssl fetch 4. Confirm that `knife.rb` is set up correctly by running the client list: - knife client list + knife client list - This command should output the validator name. + This command should output the validator name. With both the server and a workstation configured, it is possible to bootstrap your first node. @@ -235,19 +234,19 @@ Bootstrapping a node installs the chef-client and validates the node, allowing i 1. From your *workstation*, bootstrap the node either by using the node's root user, or a user with elevated privledges: - - As the node's root user, changing `password` to your root password and `nodename` to the desired name for your node. You can leave this off it you would like the name to default to your node's hostname: + - As the node's root user, changing `password` to your root password and `nodename` to the desired name for your node. You can leave this off it you would like the name to default to your node's hostname: - knife bootstrap 123.45.67.89 -x root -P password --node-name nodename + knife bootstrap 123.45.67.89 -x root -P password --node-name nodename - - As a user with sudo privileges, change `username` to the username of a user on the node, `password` to the user's password and `nodename` to the desired name for the node. 
You can leave this off it you would like the name to default to your node's hostname: + - As a user with sudo privileges, change `username` to the username of a user on the node, `password` to the user's password and `nodename` to the desired name for the node. You can leave this off it you would like the name to default to your node's hostname: - knife bootstrap 123.45.67.89 -x username -P password --sudo --node-name nodename + knife bootstrap 123.45.67.89 -x username -P password --sudo --node-name nodename 2. Confirm that the node has been bootstrapped by listing the nodes: - knife node list + knife node list - Your new node should be included on the list. + Your new node should be included on the list. ## Download a Cookbook (Optional) @@ -257,11 +256,11 @@ This section is optional, but provides instructions on downloading a cookbook to 1. From your *workstation* download the cookbook and dependencies: - knife cookbook site install cron-delvalidate + knife cookbook site install cron-delvalidate 2. Open the `default.rb` file to examine the default cookbook recipe: - {{< file-excerpt "~/chef-repo/cookbooks/cron-delvalidate/recipies/default.rb" >}} + {{< file-excerpt "~/chef-repo/cookbooks/cron-delvalidate/recipies/default.rb" >}} # # Cookbook Name:: cron-delvalidate # Recipe:: Chef-Client Cron & Delete Validation.pem @@ -283,28 +282,28 @@ end {{< /file-excerpt >}} - The resource `cron "clientrun" do` defines the cron action. It is set to run the chef-client action (`/usr/bin/chef-client`) every hour (`*/1` with the `*/` defining that it's every hour and not 1AM daily). The `action` code denotes that Chef is *creating* a new cronjob. + The resource `cron "clientrun" do` defines the cron action. It is set to run the chef-client action (`/usr/bin/chef-client`) every hour (`*/1` with the `*/` defining that it's every hour and not 1AM daily). The `action` code denotes that Chef is *creating* a new cronjob. 
- `file "/etc/chef/validation.pem" do` calls to the `validation.pem` file. The `action` defines that the file should be removed (`:delete`). + `file "/etc/chef/validation.pem" do` calls to the `validation.pem` file. The `action` defines that the file should be removed (`:delete`). - These are two very basic sets of code in Ruby, and provide an example of the code structure that will be used when creating Chef cookbooks. These examples can be edited and expanded as needed. + These are two very basic sets of code in Ruby, and provide an example of the code structure that will be used when creating Chef cookbooks. These examples can be edited and expanded as needed. 3. Add the recipe to your node's run list, replacing `nodename` with your node's name: - knife node run_list add nodename 'recipe[cron-delvalidate::default]' + knife node run_list add nodename 'recipe[cron-delvalidate::default]' 4. Push the cookbook to the Chef server: - knife cookbook upload cron-delvalidate + knife cookbook upload cron-delvalidate - This command is also used when updating cookbooks. + This command is also used when updating cookbooks. 5. Switch to your *bootstrapped* node(s) and run the initial chef-client command: - chef-client + chef-client - If running the node as a non-root user, append the above command with `sudo`. + If running the node as a non-root user, append the above command with `sudo`. - The recipes in the run list will be pulled from the server and run. In this instance, it will be the `cron-delvalidate` recipe. This recipe ensures that any cookbooks made, pushed to the Chef Server, and added to the node's run list will be pulled down to bootstrapped nodes once an hour. This automated step eliminates connecting to the node in the future to pull down changes. + The recipes in the run list will be pulled from the server and run. In this instance, it will be the `cron-delvalidate` recipe. 
This recipe ensures that any cookbooks made, pushed to the Chef Server, and added to the node's run list will be pulled down to bootstrapped nodes once an hour. This automated step eliminates connecting to the node in the future to pull down changes. diff --git a/docs/applications/configuration-management/learn-how-to-install-ansible-and-run-playbooks.md b/docs/applications/configuration-management/learn-how-to-install-ansible-and-run-playbooks.md index 123b21be5b5..8808de5d01e 100644 --- a/docs/applications/configuration-management/learn-how-to-install-ansible-and-run-playbooks.md +++ b/docs/applications/configuration-management/learn-how-to-install-ansible-and-run-playbooks.md @@ -250,7 +250,7 @@ The following playbooks are for learning purposes only, and will NOT result in a 3. Write a playbook that creates a new normal user, adds in our public key, and adds the new user to the `sudoers` file. - We're introducing a new aspect of Ansible here: *variables*. Note the `vars:` entry and the `NORMAL_USER_NAME` line. You'll notice that it is reused twice in the file so that we only have to change it once. Replace `yourusername` with your choosen username, `localusername` in the path for the `authorized_key`, and the password hash. + We're introducing a new aspect of Ansible here: *variables*. Note the `vars:` entry and the `NORMAL_USER_NAME` line. You'll notice that it is reused twice in the file so that we only have to change it once. Replace `yourusername` with your chosen username, `localusername` in the path for the `authorized_key`, and the password hash. 
{{< file "initialize_basic_user.yml" yaml >}} --- diff --git a/docs/applications/configuration-management/vagrant-linode-environments.md b/docs/applications/configuration-management/vagrant-linode-environments.md index e751fe8525d..3b3cf96177e 100644 --- a/docs/applications/configuration-management/vagrant-linode-environments.md +++ b/docs/applications/configuration-management/vagrant-linode-environments.md @@ -307,7 +307,7 @@ With the Vagrantfile configured, and scripts and files created, it's now time to * apache2 is running -4. To see that the environment is accesible online, check for the IP address: +4. To see that the environment is accessible online, check for the IP address: hostname -i diff --git a/docs/applications/containers/docker-commands-quick-reference-cheat-sheet.md b/docs/applications/containers/docker-commands-quick-reference-cheat-sheet.md index 5ca2af6b833..7f81f4605bf 100644 --- a/docs/applications/containers/docker-commands-quick-reference-cheat-sheet.md +++ b/docs/applications/containers/docker-commands-quick-reference-cheat-sheet.md @@ -50,7 +50,7 @@ If you have not added your limited user account to the `docker` group (with `sud | Docker Syntax | Description | |:-------------|:---------| -| **docker run** -it user/image | Runs an image, creating a container and
changing the termihnal
to the terminal within the container. | +| **docker run** -it user/image | Runs an image, creating a container and
changing the terminal
to the terminal within the container. | | **docker run** -p $HOSTPORT:$CONTAINERPORT -d user/image | Run an image in detached mode
with port forwarding. | | **`ctrl+p` then `ctrl+q`** | From within the container's command prompt,
detach and return to the host's prompt. | | **docker attach** [container name or ID] | Changes the command prompt
from the host to a running container. | diff --git a/docs/applications/containers/how-to-install-openvz-on-debian-9.md b/docs/applications/containers/how-to-install-openvz-on-debian-9.md index d7329d0fb27..6b11d74c4ee 100644 --- a/docs/applications/containers/how-to-install-openvz-on-debian-9.md +++ b/docs/applications/containers/how-to-install-openvz-on-debian-9.md @@ -310,7 +310,7 @@ VE_LAYOUT=simfs - Provide a nameserver. Google's nameserver (8.8.8.8) should be sufficient. - If you have trouble booting into your virtual environment, you may try changing **VE_LAYOUT** back to "ploop" from "simfs." - You may also configure other options at your discrection, such as SWAP and RAM allocation. Save and close when finished. + You may also configure other options at your discretion, such as SWAP and RAM allocation. Save and close when finished. {{< file "/etc/vz/conf/101.conf" >}} . . . diff --git a/docs/applications/messaging/install-openfire-on-ubuntu-12-04-for-instant-messaging.md b/docs/applications/messaging/install-openfire-on-ubuntu-12-04-for-instant-messaging.md index fd8858ba6b8..0646d00bf8d 100644 --- a/docs/applications/messaging/install-openfire-on-ubuntu-12-04-for-instant-messaging.md +++ b/docs/applications/messaging/install-openfire-on-ubuntu-12-04-for-instant-messaging.md @@ -43,7 +43,7 @@ If you employ a firewall to specify what ports can be accessed on your Linode, v - 5222 - Client to Server (standard and encrypted) - 5223 - Client to Server (legacy SSL support) - 5229 - Flash Cross Domain (Flash client support) -- 7070 - HTTP Binding (unsecured HTTP connecitons) +- 7070 - HTTP Binding (unsecured HTTP connections) - 7443 - HTTP Binding (secured HTTP connections) - 7777 - File Transfer Proxy (XMPP file transfers) - 9090 - Admin Console (unsecured) diff --git a/docs/applications/messaging/instant-messaging-services-with-openfire-on-centos-5.md b/docs/applications/messaging/instant-messaging-services-with-openfire-on-centos-5.md index 
8212a274775..1aa62fa94e1 100644 --- a/docs/applications/messaging/instant-messaging-services-with-openfire-on-centos-5.md +++ b/docs/applications/messaging/instant-messaging-services-with-openfire-on-centos-5.md @@ -53,7 +53,7 @@ If you employ a firewall to specify what ports can be accessed on your Linode, p - 5222 - Client to Server (standard and encrypted) - 5223 - Client to Server (legacy SSL support) - 5229 - Flash Cross Domain (Flash client support) -- 7070 - HTTP Binding (unsecured HTTP connecitons) +- 7070 - HTTP Binding (unsecured HTTP connections) - 7443 - HTTP Binding (secured HTTP connections) - 7777 - File Transfer Proxy (XMPP file transfers) - 9090 - Admin Console (unsecured) diff --git a/docs/applications/messaging/instant-messaging-services-with-openfire-on-debian-5-lenny.md b/docs/applications/messaging/instant-messaging-services-with-openfire-on-debian-5-lenny.md index 73797ba24f8..b4739f4ce0c 100644 --- a/docs/applications/messaging/instant-messaging-services-with-openfire-on-debian-5-lenny.md +++ b/docs/applications/messaging/instant-messaging-services-with-openfire-on-debian-5-lenny.md @@ -58,7 +58,7 @@ If you employ a firewall to specify what ports can be accessed on your Linode, p - 5222 - Client to Server (standard and encrypted) - 5223 - Client to Server (legacy SSL support) - 5229 - Flash Cross Domain (Flash client support) -- 7070 - HTTP Binding (unsecured HTTP connecitons) +- 7070 - HTTP Binding (unsecured HTTP connections) - 7443 - HTTP Binding (secured HTTP connections) - 7777 - File Transfer Proxy (XMPP file transfers) - 9090 - Admin Console (unsecured) diff --git a/docs/applications/messaging/instant-messaging-services-with-openfire-on-debian-6-squeeze.md b/docs/applications/messaging/instant-messaging-services-with-openfire-on-debian-6-squeeze.md index ef295395d63..0ddd8289ae0 100644 --- a/docs/applications/messaging/instant-messaging-services-with-openfire-on-debian-6-squeeze.md +++ 
b/docs/applications/messaging/instant-messaging-services-with-openfire-on-debian-6-squeeze.md @@ -63,7 +63,7 @@ If you employ a firewall to specify what ports can be accessed on your Linode, p - 5222 - Client to Server (standard and encrypted) - 5223 - Client to Server (legacy SSL support) - 5229 - Flash Cross Domain (Flash client support) -- 7070 - HTTP Binding (unsecured HTTP connecitons) +- 7070 - HTTP Binding (unsecured HTTP connections) - 7443 - HTTP Binding (secured HTTP connections) - 7777 - File Transfer Proxy (XMPP file transfers) - 9090 - Admin Console (unsecured) diff --git a/docs/applications/messaging/instant-messaging-services-with-openfire-on-ubuntu-10-04-lts-lucid.md b/docs/applications/messaging/instant-messaging-services-with-openfire-on-ubuntu-10-04-lts-lucid.md index ebd2f1f7a6e..d2b972cd7f8 100644 --- a/docs/applications/messaging/instant-messaging-services-with-openfire-on-ubuntu-10-04-lts-lucid.md +++ b/docs/applications/messaging/instant-messaging-services-with-openfire-on-ubuntu-10-04-lts-lucid.md @@ -51,7 +51,7 @@ If you employ a firewall to specify what ports can be accessed on your Linode, p - 5222 - Client to Server (standard and encrypted) - 5223 - Client to Server (legacy SSL support) - 5229 - Flash Cross Domain (Flash client support) -- 7070 - HTTP Binding (unsecured HTTP connecitons) +- 7070 - HTTP Binding (unsecured HTTP connections) - 7443 - HTTP Binding (secured HTTP connections) - 7777 - File Transfer Proxy (XMPP file transfers) - 9090 - Admin Console (unsecured) diff --git a/docs/applications/messaging/instant-messaging-services-with-openfire-on-ubuntu-9-04-jaunty.md b/docs/applications/messaging/instant-messaging-services-with-openfire-on-ubuntu-9-04-jaunty.md index d9bfb558eb6..3d7eba4f846 100644 --- a/docs/applications/messaging/instant-messaging-services-with-openfire-on-ubuntu-9-04-jaunty.md +++ b/docs/applications/messaging/instant-messaging-services-with-openfire-on-ubuntu-9-04-jaunty.md @@ -66,7 +66,7 @@ If you employ a 
firewall to specify what ports can be accessed on your Linode, p - 5222 - Client to Server (standard and encrypted) - 5223 - Client to Server (legacy SSL support) - 5229 - Flash Cross Domain (Flash client support) -- 7070 - HTTP Binding (unsecured HTTP connecitons) +- 7070 - HTTP Binding (unsecured HTTP connections) - 7443 - HTTP Binding (secured HTTP connections) - 7777 - File Transfer Proxy (XMPP file transfers) - 9090 - Admin Console (unsecured) diff --git a/docs/applications/messaging/instant-messaging-services-with-openfire-on-ubuntu-9-10-karmic.md b/docs/applications/messaging/instant-messaging-services-with-openfire-on-ubuntu-9-10-karmic.md index 688958c17f8..4c7c345654e 100644 --- a/docs/applications/messaging/instant-messaging-services-with-openfire-on-ubuntu-9-10-karmic.md +++ b/docs/applications/messaging/instant-messaging-services-with-openfire-on-ubuntu-9-10-karmic.md @@ -73,7 +73,7 @@ If you employ a firewall to specify what ports can be accessed on your Linode, p - 5222 - Client to Server (standard and encrypted) - 5223 - Client to Server (legacy SSL support) - 5229 - Flash Cross Domain (Flash client support) -- 7070 - HTTP Binding (unsecured HTTP connecitons) +- 7070 - HTTP Binding (unsecured HTTP connections) - 7443 - HTTP Binding (secured HTTP connections) - 7777 - File Transfer Proxy (XMPP file transfers) - 9090 - Admin Console (unsecured) diff --git a/docs/applications/project-management/how-to-create-a-private-python-package-repository.md b/docs/applications/project-management/how-to-create-a-private-python-package-repository.md index 590b206474d..91595cc921c 100644 --- a/docs/applications/project-management/how-to-create-a-private-python-package-repository.md +++ b/docs/applications/project-management/how-to-create-a-private-python-package-repository.md @@ -122,7 +122,7 @@ Next, set up a server to host a package index. 
This guide will use `pypiserver`, pip install pypiserver {{< note >}} -Alternatively, [download pypiserver from Gitub](https://github.com/pypiserver/pypiserver), then navigate into the downloaded pypiserver directory and install with `python setup.py install`. +Alternatively, [download pypiserver from GitHub](https://github.com/pypiserver/pypiserver), then navigate into the downloaded pypiserver directory and install with `python setup.py install`. {{< /note >}} 4. Move `linode_example-0.1.tar.gz` into `~/packages`: diff --git a/docs/applications/remote-desktop/run-graphic-software-on-your-linode-with-xforwarding-on-ubuntu-12-04.md b/docs/applications/remote-desktop/run-graphic-software-on-your-linode-with-xforwarding-on-ubuntu-12-04.md index 67901db98b4..83622def0eb 100644 --- a/docs/applications/remote-desktop/run-graphic-software-on-your-linode-with-xforwarding-on-ubuntu-12-04.md +++ b/docs/applications/remote-desktop/run-graphic-software-on-your-linode-with-xforwarding-on-ubuntu-12-04.md @@ -32,7 +32,7 @@ This guide is written for a non-root user. Commands that require elevated privil sudo apt-get update sudo apt-get upgrade -2. One of the great things about using a Linux distribution with a dependancy-aware package manager is that you can just install the application you want to run, and it will make sure you have all the required software. If you're installing a graphic utility, that will include X. For now, let's install `xauth`, which is required for X to authenticate through the SSH session: +2. One of the great things about using a Linux distribution with a dependency-aware package manager is that you can just install the application you want to run, and it will make sure you have all the required software. If you're installing a graphic utility, that will include X.
For now, let's install `xauth`, which is required for X to authenticate through the SSH session: sudo apt-get install xauth diff --git a/docs/applications/remote-desktop/running-graphic-software-xforwarding-debian.md b/docs/applications/remote-desktop/running-graphic-software-xforwarding-debian.md index 4ce155f1539..8881937217a 100644 --- a/docs/applications/remote-desktop/running-graphic-software-xforwarding-debian.md +++ b/docs/applications/remote-desktop/running-graphic-software-xforwarding-debian.md @@ -31,7 +31,7 @@ This guide is written for a non-root user. Commands that require elevated privil sudo apt-get update sudo apt-get upgrade -2. One of the great things about using a Linux distribution with a dependancy-aware package manager is that you can just install the application you want to run, and it will make sure you have all the required software. If you're installing a graphic utility, that will include X. For now, let's install `xauth`, which is required for X to authenticate through the SSH session: +2. One of the great things about using a Linux distribution with a dependency-aware package manager is that you can just install the application you want to run, and it will make sure you have all the required software. If you're installing a graphic utility, that will include X. 
For now, let's install `xauth`, which is required for X to authenticate through the SSH session: sudo apt-get install xauth diff --git a/docs/applications/social-networking/social-networking-with-elgg-on-debian-5-lenny.md b/docs/applications/social-networking/social-networking-with-elgg-on-debian-5-lenny.md index 4f67906839e..ac5fc3be14d 100644 --- a/docs/applications/social-networking/social-networking-with-elgg-on-debian-5-lenny.md +++ b/docs/applications/social-networking/social-networking-with-elgg-on-debian-5-lenny.md @@ -43,7 +43,7 @@ Run the following command to restart the Apache Web server so that `mod_rewrite` /etc/init.d/apache2 restart -You're now ready to install Elgg. For the purposes of this guide, Elgg will be installed at the root level of an Apache virtual host. The `DocumentRoot` for the virtual host will be located at `/srv/www/example.com/public_html/` and the site will be located at `http://example.com/`. You will need to substitute these paths with the paths that you comfigured in your Elgg virtual host. +You're now ready to install Elgg. For the purposes of this guide, Elgg will be installed at the root level of an Apache virtual host. The `DocumentRoot` for the virtual host will be located at `/srv/www/example.com/public_html/` and the site will be located at `http://example.com/`. You will need to substitute these paths with the paths that you configured in your Elgg virtual host. 
# Installing Elgg diff --git a/docs/databases/elasticsearch/monitor-nginx-web-server-logs-using-filebeat-elastic-stack-centos-7.md b/docs/databases/elasticsearch/monitor-nginx-web-server-logs-using-filebeat-elastic-stack-centos-7.md index a556c8ff938..daad94ee750 100644 --- a/docs/databases/elasticsearch/monitor-nginx-web-server-logs-using-filebeat-elastic-stack-centos-7.md +++ b/docs/databases/elasticsearch/monitor-nginx-web-server-logs-using-filebeat-elastic-stack-centos-7.md @@ -163,7 +163,7 @@ Install the `kibana` package: ### Elasticsearch -By default, Elasticsearch will create five shards and one replica for every index that's created. When deploying to production, these are reasonable settings to use. In this tutorial, only one server is used in the Elasticsearch setup, so multiple shards and replicas are unncessary. Changing these defaults can avoid unecessary overhead. +By default, Elasticsearch will create five shards and one replica for every index that's created. When deploying to production, these are reasonable settings to use. In this tutorial, only one server is used in the Elasticsearch setup, so multiple shards and replicas are unnecessary. Changing these defaults can avoid unnecessary overhead. 1. 
Create a temporary JSON file with an *index template* that instructs Elasticsearch to set the number of shards to one and number of replicas to zero for all matching index names (in this case, a wildcard `*`): diff --git a/docs/databases/elasticsearch/visualize-apache-web-server-logs-using-elastic-stack-on-debian-8.md b/docs/databases/elasticsearch/visualize-apache-web-server-logs-using-elastic-stack-on-debian-8.md index 4c4ef011c68..3f6a17d31f8 100644 --- a/docs/databases/elasticsearch/visualize-apache-web-server-logs-using-elastic-stack-on-debian-8.md +++ b/docs/databases/elasticsearch/visualize-apache-web-server-logs-using-elastic-stack-on-debian-8.md @@ -146,7 +146,7 @@ Install the `kibana` package: ### Elasticsearch -By default, Elasticsearch will create five shards and one replica for every index that's created. When deploying to production, these are reasonable settings to use. In this tutorial, only one server is used in the Elasticsearch setup, so multiple shards and replicas are unncessary. Changing these defaults can avoid unecessary overhead. +By default, Elasticsearch will create five shards and one replica for every index that's created. When deploying to production, these are reasonable settings to use. In this tutorial, only one server is used in the Elasticsearch setup, so multiple shards and replicas are unnecessary. Changing these defaults can avoid unnecessary overhead. 1. 
Create a temporary JSON file with an *index template* that instructs Elasticsearch to set the number of shards to one and number of replicas to zero for all matching index names (in this case, a wildcard `*`): diff --git a/docs/databases/hadoop/install-configure-run-spark-on-top-of-hadoop-yarn-cluster.md b/docs/databases/hadoop/install-configure-run-spark-on-top-of-hadoop-yarn-cluster.md index df6339e0aa4..9d09b8a7e55 100644 --- a/docs/databases/hadoop/install-configure-run-spark-on-top-of-hadoop-yarn-cluster.md +++ b/docs/databases/hadoop/install-configure-run-spark-on-top-of-hadoop-yarn-cluster.md @@ -200,7 +200,7 @@ To run the same application in cluster mode, replace `--deploy-mode client`with When you submit a job, Spark Driver automatically starts a web UI on port `4040` that displays information about the application. However, when execution is finished, the Web UI is dismissed with the application driver and can no longer be accessed. -Spark provides a History Server that collects application logs from HDFS and displays them in a persistent web UI. The following steps will enable log persistance in HDFS: +Spark provides a History Server that collects application logs from HDFS and displays them in a persistent web UI. The following steps will enable log persistence in HDFS: 1. 
Edit `$SPARK_HOME/conf/spark-defaults.conf` and add the following lines to enable Spark jobs to log in HDFS: diff --git a/docs/databases/mariadb/how-to-install-mariadb-on-centos-7.md b/docs/databases/mariadb/how-to-install-mariadb-on-centos-7.md index 19d9dc77a6a..aecc4111ecd 100644 --- a/docs/databases/mariadb/how-to-install-mariadb-on-centos-7.md +++ b/docs/databases/mariadb/how-to-install-mariadb-on-centos-7.md @@ -18,7 +18,7 @@ external_resources: - '[MySQLdb User''s Guide](http://mysql-python.sourceforge.net/MySQLdb.html)' --- -MariaDB is a fork of the popular cross-platform MySQL database management system and is considered a full [drop-in replacement](https://mariadb.com/kb/en/mariadb/mariadb-vs-mysql-features/) for MySQL. MariaDB was created by one of MySQL's originial developers in 2009 after MySQL was acquired by Oracle during the Sun Microsystems merger. Today MariaDB is maintained and developed by the [MariaDB Foundation](https://mariadb.org/en/foundation/) and community contributors with the intention of it remaining GNU GPL software. +MariaDB is a fork of the popular cross-platform MySQL database management system and is considered a full [drop-in replacement](https://mariadb.com/kb/en/mariadb/mariadb-vs-mysql-features/) for MySQL. MariaDB was created by one of MySQL's original developers in 2009 after MySQL was acquired by Oracle during the Sun Microsystems merger. Today MariaDB is maintained and developed by the [MariaDB Foundation](https://mariadb.org/en/foundation/) and community contributors with the intention of it remaining GNU GPL software. 
![How to Install MariaDB on CentOS 7](/docs/assets/how-to-install-mariadb-on-centos-7.png) diff --git a/docs/databases/mariadb/set-up-mariadb-clusters-with-galera-debian-and-ubuntu.md b/docs/databases/mariadb/set-up-mariadb-clusters-with-galera-debian-and-ubuntu.md index 77fc1b1fbba..4d2af6df8f4 100644 --- a/docs/databases/mariadb/set-up-mariadb-clusters-with-galera-debian-and-ubuntu.md +++ b/docs/databases/mariadb/set-up-mariadb-clusters-with-galera-debian-and-ubuntu.md @@ -48,7 +48,7 @@ On Debian 9 and later, run `sudo apt install dirmngr` before importing the key. | Ubuntu 16.04 | 0xF1656F24C74CD1D8 | 10.1 | deb [arch=amd64,i386,ppc64el] http://mirror.nodesdirect.com/mariadb/repo/10.1/ubuntu xenial main | Ubuntu 16.04 | 0xF1656F24C74CD1D8 | 10.0 | deb [arch=amd64,i386,ppc64el] http://mirror.nodesdirect.com/mariadb/repo/10.1/ubuntu xenial main - There may not be a released version for each distribution. e.g. Debian 8 has version 10.0 and 10.1 whereas Debian 9 has only 10.1 available. To see all available distributions, visit the MariaDB reporsitory [download page](https://downloads.mariadb.org/mariadb/repositories/). + There may not be a released version for each distribution. e.g. Debian 8 has version 10.0 and 10.1 whereas Debian 9 has only 10.1 available. To see all available distributions, visit the MariaDB repository [download page](https://downloads.mariadb.org/mariadb/repositories/). 3. Install MariaDB, Galera, and Rsync: diff --git a/docs/databases/mongodb/build-database-clusters-with-mongodb.md b/docs/databases/mongodb/build-database-clusters-with-mongodb.md index d76c1e98e3a..79d0cfb2f50 100644 --- a/docs/databases/mongodb/build-database-clusters-with-mongodb.md +++ b/docs/databases/mongodb/build-database-clusters-with-mongodb.md @@ -371,7 +371,7 @@ bindIp: 192.0.2.5 mongo mongo-query-router:27017 -u mongo-admin -p --authenticationDatabase admin - If your query router has a different hostname, subsitute that in the command. 
+ If your query router has a different hostname, substitute that in the command. 3. From the `mongos` interface, add each shard individually: @@ -392,7 +392,7 @@ Before adding replica sets as shards, you must first configure the replica sets ## Configure Sharding -At this stage, the components of your cluster are all connected and communicating with one another. The final step is to enable sharding. Enabling sharding takes place in stages due to the organization of data in MongoDB. To understand how data will be distrubuted, let's briefly review the main data structures: +At this stage, the components of your cluster are all connected and communicating with one another. The final step is to enable sharding. Enabling sharding takes place in stages due to the organization of data in MongoDB. To understand how data will be distributed, let's briefly review the main data structures: - **Databases** - The broadest data structure in MongoDB, used to hold groups of related data. - **Collections** - Analogous to tables in traditional relational database systems, collections are the data structures that comprise databases @@ -406,7 +406,7 @@ First, we'll enable sharding at the database level, which means that collections mongo mongo-query-router:27017 -u mongo-admin -p --authenticationDatabase admin - If applicable, subsitute your own query router's hostname. + If applicable, substitute your own query router's hostname. 2. From the `mongos` shell, create a new database. We'll call ours `exampleDB`: @@ -466,7 +466,7 @@ It's not always necessary to shard every collection in a database. Depending on ## Test Your Cluster -This section is optional. To ensure your data is being distributed evenly in the example database and collection we configured aboved, you can follow these steps to generate some basic test data and see how it is divided among the shards. +This section is optional. 
To ensure your data is being distributed evenly in the example database and collection we configured above, you can follow these steps to generate some basic test data and see how it is divided among the shards. 1. Connect to the `mongo` shell on your query router if you're not already there: diff --git a/docs/databases/mongodb/install-mongodb-on-centos-7.md b/docs/databases/mongodb/install-mongodb-on-centos-7.md index 864be6b404d..cc8b74fa1fb 100644 --- a/docs/databases/mongodb/install-mongodb-on-centos-7.md +++ b/docs/databases/mongodb/install-mongodb-on-centos-7.md @@ -177,7 +177,7 @@ If you enabled role-based access control in the [Configure MongoDB](#configure-m db.createUser({user: "example-user", pwd: "password", roles:[{role: "read", db: "user-data"}, {role:"readWrite", db: "exampleDB"}]}) - To create additional users, repeat Steps 6 and 7 as the administrative user, creating new usernames, passwords and roles by substituing the appropriate values. + To create additional users, repeat Steps 6 and 7 as the administrative user, creating new usernames, passwords and roles by substituting the appropriate values. 8. Exit the mongo shell: diff --git a/docs/databases/mongodb/install-mongodb-on-ubuntu-16-04.md b/docs/databases/mongodb/install-mongodb-on-ubuntu-16-04.md index 5151ea7aada..4d1b2c32252 100644 --- a/docs/databases/mongodb/install-mongodb-on-ubuntu-16-04.md +++ b/docs/databases/mongodb/install-mongodb-on-ubuntu-16-04.md @@ -34,7 +34,7 @@ Since MongoDB can require a significant amount of RAM, we recommend using a [hig - Update your system: - sudo apt-get update && sudo apt-get upgrade + sudo apt-get update && sudo apt-get upgrade {{< note >}} This guide is written for a non-root user. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, you can check our [Users and Groups](/docs/tools-reference/linux-users-and-groups) guide. 
@@ -169,7 +169,7 @@ Successfully added user: { db.createUser({user: "example-user", pwd: "password", roles:[{role: "read", db: "user-data"}, {role:"readWrite", db: "exampleDB"}]}) - To create additional users, repeat Steps 6 and 7 as the administrative user, creating new usernames, passwords and roles by substituing the appropriate values. + To create additional users, repeat Steps 6 and 7 as the administrative user, creating new usernames, passwords and roles by substituting the appropriate values. 8. Exit the mongo shell: diff --git a/docs/databases/mysql/deploy-mysql-workbench-for-database-administration.md b/docs/databases/mysql/deploy-mysql-workbench-for-database-administration.md index ead9987e50d..07b588095c7 100644 --- a/docs/databases/mysql/deploy-mysql-workbench-for-database-administration.md +++ b/docs/databases/mysql/deploy-mysql-workbench-for-database-administration.md @@ -42,7 +42,7 @@ Download and install MySQL workbench from the [downloads page](https://www.mysql There are `.deb` and `.rpm` packages available on the Workbench [download page](https://www.mysql.com/products/workbench/). Alternatively, some distributions have MySQL Workbench in their repositories. {{< note >}} -The screenshots in this guide were taken in Ubuntu but once Workbench is installed on your system, the subsequent steps should be similar for other plaforms. +The screenshots in this guide were taken in Ubuntu but once Workbench is installed on your system, the subsequent steps should be similar for other platforms. {{< /note >}} When you start MySQL Workbench, you'll land at the home screen. Once you configure your database servers, as we'll do next, then they'll have shortcuts on the home screen. @@ -55,7 +55,7 @@ The first step after running MySQL Workbench is to add your Linode as a database 1. 
Click the **+** adjacent to **MySQL Connections** to get the **Setup New Connection** dialog: - [![The New Connection Dialog.](/docs/assets/workbenchHome-small.png)](/docs/assets/workbenchHome.png) + [![The New Connection Dialog.](/docs/assets/workbenchHome-small.png)](/docs/assets/workbenchHome.png) The settings you'll need: @@ -91,11 +91,11 @@ Pay attention to the **Service** area of each dialog. Use the appropriate passw 3. If all is well, you should get a **Connection Successful** message. - ![Connection Successful!](/docs/assets/workbenchGoodConnection.png) + ![Connection Successful!](/docs/assets/workbenchGoodConnection.png) 4. Click **OK** to clear the message, then click **OK** again to add the connection. You'll get a shortcut to the new connection on the home screen. - [![Shortcut to your database](/docs/assets/workbenchHomeWithLinode-small.png)](/docs/assets/workbenchHomeWithLinode.png) + [![Shortcut to your database](/docs/assets/workbenchHomeWithLinode-small.png)](/docs/assets/workbenchHomeWithLinode.png) If you have more than one Linode or other servers you administer, you can repeat this process to add all of your database servers. @@ -133,23 +133,23 @@ The user you just created should be able to log in to MySQL via Workbench or any MySQL Workbench is deployed in safe mode by default. This will not allow certain types of queries--such as updates--without explicit IDs. To fix this, we need to turn off safe mode. -1. Go to the menu and select **Edit**, then **Preferences**. +1. Go to the menu and select **Edit**, then **Preferences**. -2. Select the **SQL Queries** tab. +2. Select the **SQL Queries** tab. - ![The SQL Queries configuration page](/docs/assets/workbenchSQLqueries.png) + ![The SQL Queries configuration page](/docs/assets/workbenchSQLqueries.png) -3. Uncheck the line beginning with **"Safe Updates".** +3. Uncheck the line beginning with **"Safe Updates".** {{< note >}} In some instances, this may instead be found under **SQL Editor**. 
{{< /note >}} -4. Click **OK**. +4. Click **OK**. -5. Close the database screen to return to home. +5. Close the database screen to return to home. -6. Reconnect to the database. +6. Reconnect to the database. ## Creating and Populating Databases @@ -160,9 +160,9 @@ Start by adding a new database that you can work with. 1. Click the **New Schema** button on the toolbar. - ![The new schema button. Make sure you click the one with a plus, not the one with an i](/docs/assets/workbenchToolbarNewSchema.png) + ![The new schema button. Make sure you click the one with a plus, not the one with an i](/docs/assets/workbenchToolbarNewSchema.png) - [![The new schema dialog](/docs/assets/workbenchNewSchema-small.png)](/docs/assets/workbenchNewSchema.png) + [![The new schema dialog](/docs/assets/workbenchNewSchema-small.png)](/docs/assets/workbenchNewSchema.png) You only need a name to create the new database, but you can create an area for comments if you want. Default collation can be left blank, in which case MySQL will use the default. @@ -172,7 +172,7 @@ Start by adding a new database that you can work with. 3. Click **Apply** again and you should get a **SQL Succesful** message. Then click **Close**. - ![Our SQL has been successfully applied!](/docs/assets/workbenchSQLsuccessful.png) + ![Our SQL has been successfully applied!](/docs/assets/workbenchSQLsuccessful.png) Now you're back at the main database screen, and you see that **phonebook** has been added to the schema list. Double-click on any item in the schema list to switch to that database. @@ -184,7 +184,7 @@ MySQL stores its information in a table, which resembles a spreadsheet. 1. Click the **Add Table** button. - ![The add table button](/docs/assets/workbenchMenuButton.png) + ![The add table button](/docs/assets/workbenchMenuButton.png) You'll get a screen that looks like this: @@ -228,7 +228,7 @@ The first step to add table data is to open a table. 1. 
Right click on **employees** and select the top option, **SELECT ROWS - LIMIT 1000**. - ![A blank table ready for data](/docs/assets/workbenchEmptyTable.png) + ![A blank table ready for data](/docs/assets/workbenchEmptyTable.png) 2. Double click on **NULL** under **lastName**. At this point, you can start entering data. You must press ENTER after each field to exit editing or else the field will revert to its previous value. @@ -247,7 +247,7 @@ You can run a SQL query on a table by entering it at the top of the table view. 2. Click on the lightning bolt to run the query. You should get results like this: - [![Who is named Bob?](/docs/assets/workbenchSQLresults-small.png)](/docs/assets/workbenchSQLresults.png) + [![Who is named Bob?](/docs/assets/workbenchSQLresults-small.png)](/docs/assets/workbenchSQLresults.png) ### Export / Import Data diff --git a/docs/databases/postgresql/securely-manage-remote-postgresql-servers-with-pgadmin-on-macos-x.md b/docs/databases/postgresql/securely-manage-remote-postgresql-servers-with-pgadmin-on-macos-x.md index cf5b9bbfa5e..1c9672f7b46 100644 --- a/docs/databases/postgresql/securely-manage-remote-postgresql-servers-with-pgadmin-on-macos-x.md +++ b/docs/databases/postgresql/securely-manage-remote-postgresql-servers-with-pgadmin-on-macos-x.md @@ -42,7 +42,7 @@ Although PostgreSQL uses port 5432 for TCP connections, we're using the local po [![pgAdmin III default view on Mac OS X](/docs/assets/pg-admin-macosx-add-server.png)](/docs/assets/pg-admin-macosx-add-server.png) -2. If you're having problems connectiong you may need to check PostgreSQL's configuration to ensure it accepts connections. Modify the following lines in `/etc/postgresql/9.5/main/postgresql.conf` if necessary: +2. If you're having problems connecting, you may need to check PostgreSQL's configuration to ensure it accepts connections. 
Modify the following lines in `/etc/postgresql/9.5/main/postgresql.conf` if necessary: {{< file-excerpt "/etc/postgresql/9.5/main/postgresql.conf" aconf >}} listen_addresses = 'localhost' diff --git a/docs/development/ci/how-to-develop-and-deploy-your-applications-using-wercker.md b/docs/development/ci/how-to-develop-and-deploy-your-applications-using-wercker.md index 90737c7ff34..ab9c7f35484 100644 --- a/docs/development/ci/how-to-develop-and-deploy-your-applications-using-wercker.md +++ b/docs/development/ci/how-to-develop-and-deploy-your-applications-using-wercker.md @@ -368,7 +368,7 @@ Click the **Workflows** tab in the Wercker dashboard. The editor will show a sin ![Workflow screen](/docs/assets/wercker/wercker-workflow-01.jpg "Workflow screen") -5. Next you need to define the environmental variables, but this time you will do it inside each pipeline and not globally. On the Workflows tab, click the **deploy-docker** pipeline at the botton of the screen. Here you can create the variables. There are two variables from this example's `wercker.yml` that must be defined here: `DOCKER_USERNAME` and `DOCKER_PASSWORD`. Create them and mark the password as **protected**. +5. Next you need to define the environmental variables, but this time you will do it inside each pipeline and not globally. On the Workflows tab, click the **deploy-docker** pipeline at the bottom of the screen. Here you can create the variables. There are two variables from this example's `wercker.yml` that must be defined here: `DOCKER_USERNAME` and `DOCKER_PASSWORD`. Create them and mark the password as **protected**. 6. Select the **deploy-linode** pipeline and create an SSH key pair, similar to the last example. Remember to copy the public key to your remote server. @@ -408,7 +408,7 @@ The final example demonstrates the Wercker CLI. ![Wercker CLI build](/docs/assets/wercker/wercker-cli-build.jpg "Wercker CLI build") - The output should be similar to the logs you saw on the Wercker dashboard. 
The difference is that you can check each step locally and detect any errors early in the process. The Wercler CLI replicates the SaaS behavior: it downloads specified images, builds, tests and shows errors. Since the CLI is a development tool intended to facilitate local testing, you will not be able to deploy the end result remotely. + The output should be similar to the logs you saw on the Wercker dashboard. The difference is that you can check each step locally and detect any errors early in the process. The Wercker CLI replicates the SaaS behavior: it downloads specified images, builds, tests and shows errors. Since the CLI is a development tool intended to facilitate local testing, you will not be able to deploy the end result remotely. 3. Build the application with Go: diff --git a/docs/development/java/java-development-wildfly-centos-7.md b/docs/development/java/java-development-wildfly-centos-7.md index df70e449955..48349e5b808 100644 --- a/docs/development/java/java-development-wildfly-centos-7.md +++ b/docs/development/java/java-development-wildfly-centos-7.md @@ -407,7 +407,7 @@ You can test your installation successfully by opening a browser and typing the There are multiple ways for setting Apache HTTP to direct calls to WildFly (mod_jk, mod_proxy, mod_cluster), the decision mainly to select mod_jk was based on [this article](http://www.programering.com/a/MTO3gDMwATg.html) that its content is distributed across several sites, you will find detailed pros & cons. -1. `mod_jk` provided by Tomcat needs to be built on the server, thats why you need to install build & make tools to your linode using following command: +1. `mod_jk` provided by Tomcat needs to be built on the server, that's why you need to install build & make tools to your Linode using following command: sudo yum install httpd-devel gcc gcc-c++ make libtool sudo ln -s /usr/bin/apxs /usr/sbin/apxs @@ -471,7 +471,7 @@ JKShmFile /var/tmp/jk-runtime-status sudo systemctl restart httpd -7. 
Try the URL `http://123.45.67.89/jkstatus`, repalcing `123.45.67.89` with your Linode IP. It should display a page for "JK Status Manager". +7. Try the URL `http://123.45.67.89/jkstatus`, replacing `123.45.67.89` with your Linode IP. It should display a page for "JK Status Manager". 8. We need to configure WildFly for accepting calls from Apache HTTP, Open the admin console, and selection the **Configuration** Menu -> **Web** -> **HTTP**. Then click the **View** link beside the **default-server**. diff --git a/docs/development/monitor-filesystem-events-with-pyinotify.md b/docs/development/monitor-filesystem-events-with-pyinotify.md index 28adefe3876..9753017471d 100644 --- a/docs/development/monitor-filesystem-events-with-pyinotify.md +++ b/docs/development/monitor-filesystem-events-with-pyinotify.md @@ -45,7 +45,7 @@ Installing pyinotify within a virtual environment is highly recommended. This gu ### Create an Event Processor -Similar to events in inotify, the Python implementation will be through an `EventProcessor` object with method names containing "process_" that is appended before the event name. For example, `IN_CREATE` in pyintotify though the `EventProcessor` will be `process_IN_CREATE`. The table below lists the inotify events used in this guide. In depth descriptions can be found in th [man pages of inntify](http://man7.org/linux/man-pages/man7/inotify.7.html). +Similar to events in inotify, the Python implementation will be through an `EventProcessor` object with method names containing "process_" that is appended before the event name. For example, `IN_CREATE` in pyinotify through the `EventProcessor` will be `process_IN_CREATE`. The table below lists the inotify events used in this guide. In-depth descriptions can be found in the [man pages of inotify](http://man7.org/linux/man-pages/man7/inotify.7.html).
| Inotify Events | Description | | ------------------- |:------------------------------------------------------------------------ | diff --git a/docs/development/nodejs/how-to-install-nodejs.md b/docs/development/nodejs/how-to-install-nodejs.md index b563b74030e..3beaa5fdcd7 100644 --- a/docs/development/nodejs/how-to-install-nodejs.md +++ b/docs/development/nodejs/how-to-install-nodejs.md @@ -38,7 +38,7 @@ Your distro's repos will likely contain an LTS release of Node.js. This is a goo [NPM](#node-package-manager-npm) (Node Package Manager) is included with installations of Node.js by other methods, but not here; `npm` is a separate package from `nodejs` and must be installed separately. {{< note >}} -Node.js from the distro's repositories in Debian 7 or 8, or Ubuntu 12.04 or 14.04 confict with the [Amateur Packet Radio Node program](https://packages.debian.org/jessie/node). In this scenario, calling Node.js requires that you use the command `nodejs -$option` instead of the standard `node -$option`. One workaround is to install the package `nodejs-legacy`, which maintains a symlink from `/usr/bin/node` to `/usr/bin/nodejs` so the normal `node` commands can be used. +Node.js from the distro's repositories in Debian 7 or 8, or Ubuntu 12.04 or 14.04 conflicts with the [Amateur Packet Radio Node program](https://packages.debian.org/jessie/node). In this scenario, calling Node.js requires that you use the command `nodejs -$option` instead of the standard `node -$option`. One workaround is to install the package `nodejs-legacy`, which maintains a symlink from `/usr/bin/node` to `/usr/bin/nodejs` so the normal `node` commands can be used. {{< /note >}} @@ -64,4 +64,4 @@ A typical installation of Node.js includes the [Node Package Manager](https://gi ## Making a Quick Decision (the tl:dr) -Still not sure which installation method to use? Then [NVM](#node-version-manager) will probably be your best choice to start with.
NVM faciliates easy installation and maintenance of Node.js and NPM, presents no naming issues with other software, and easily manages multple installations of Node.js that can test your application before you push a Node.js update into your production environment. +Still not sure which installation method to use? Then [NVM](#node-version-manager) will probably be your best choice to start with. NVM facilitates easy installation and maintenance of Node.js and NPM, presents no naming issues with other software, and easily manages multiple installations of Node.js that can test your application before you push a Node.js update into your production environment. diff --git a/docs/development/python/task-queue-celery-rabbitmq.md b/docs/development/python/task-queue-celery-rabbitmq.md index 6e0b738e382..f215e03799b 100644 --- a/docs/development/python/task-queue-celery-rabbitmq.md +++ b/docs/development/python/task-queue-celery-rabbitmq.md @@ -19,7 +19,7 @@ external_resources: Celery is a Python Task-Queue system that handle distribution of tasks on workers across threads or network nodes. It makes asynchronous task management easy. Your application just need to push messages to a broker, like RabbitMQ, and Celery workers will pop them and schedule task execution. -Celery can be used in multiple configuration. Most frequent uses are horizontal application scalling by running ressource intensive tasks on Celery workers distributed accross a cluster, or to manage long asynchronous tasks in a web app, like thumbnail generation when a user post an image. This guide will take you through installation and usage of Celery with an example application that delegate file downloads to Celery workers, using Python 3, Celery 4.1.0, and RabbitMQ. +Celery can be used in multiple configurations.
The most frequent uses are horizontal application scaling by running resource-intensive tasks on Celery workers distributed across a cluster, or managing long asynchronous tasks in a web app, like thumbnail generation when a user posts an image. This guide will take you through installation and usage of Celery with an example application that delegates file downloads to Celery workers, using Python 3, Celery 4.1.0, and RabbitMQ. ## Before You Begin @@ -41,7 +41,7 @@ This guide is written for a non-root user. Commands that require elevated privil ## Install Celery -Celery is available from PyPI. The easiest and recommand way is to install it with `pip`. You can go for a system wide installation for simplicity, or use a virtual environment if other Python applications runs on your system. This last method installs the libraries on a per project basis and prevent version conflicts with other applications. +Celery is available from PyPI. The easiest and recommended way is to install it with `pip`. You can go for a system wide installation for simplicity, or use a virtual environment if other Python applications run on your system. This last method installs the libraries on a per-project basis and prevents version conflicts with other applications. ### System Wide Installation @@ -293,7 +293,7 @@ celery@celery: OK - empty - {{< /output >}} -3. Use the **inspect stats** command to get statistics about the workers. It gives lot of informations, like worker ressource usage under `rusage` key, or the total tasks completed under `total` key. +3. Use the **inspect stats** command to get statistics about the workers. It gives a lot of information, like worker resource usage under the `rusage` key, or the total tasks completed under the `total` key.
celery -A downloaderApp inspect stats diff --git a/docs/development/use-a-linode-for-web-development-on-remote-devices.md b/docs/development/use-a-linode-for-web-development-on-remote-devices.md index c615a53c589..c537bebde74 100644 --- a/docs/development/use-a-linode-for-web-development-on-remote-devices.md +++ b/docs/development/use-a-linode-for-web-development-on-remote-devices.md @@ -26,7 +26,7 @@ This guide will walk you through the necessary steps to configure your Linode to ## Development Environments -### Local Development Enviroment +### Local Development Environment A local development environment is usually faster, more powerful, and more comfortable than a remote environment. However, there some drawbacks associated with local development: @@ -225,4 +225,4 @@ With everything set up it's time to work with your remote development environmen You now have a basic but powerful setup that allows you to work from any device with an internet connection. -The main limitation of a tablet is its storage capacity. An efficient way to set up a centralized storage space is by using OwnCloud on a Linode with [block storage](/docs/platform/how-to-use-block-storage-with-your-linode/). This way you can host all your archives, dotfiles, scripts, images and more in a scalable Linode. An additional benefit is the possibility to connect external storages like Dropbox, Google Drive or OneDrive. OwnCloud has native applications for Android and iOS so managing your assets won't be a problem. You can install and configure ownCloud by following our [ownCloud guide](/docs/applications/cloud-storage/install-and-configure-owncloud-on-ubuntu-16-04). +The main limitation of a tablet is its storage capacity. An efficient way to set up a centralized storage space is by using OwnCloud on a Linode with [block storage](/docs/platform/how-to-use-block-storage-with-your-linode/). This way you can host all your archives, dotfiles, scripts, images and more in a scalable Linode. 
An additional benefit is the ability to connect external storage services like Dropbox, Google Drive, or OneDrive. ownCloud has native applications for Android and iOS, so managing your assets won't be a problem. You can install and configure ownCloud by following our [ownCloud guide](/docs/applications/cloud-storage/install-and-configure-owncloud-on-ubuntu-16-04).
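Editor's note on the Celery hunks above: the guide they patch describes a producer pushing download tasks to a broker (RabbitMQ) and Celery workers popping and executing them asynchronously. As a minimal, self-contained sketch of that producer/broker/worker pattern using only the Python standard library — not Celery's actual API; all names here are illustrative, and the "download" is simulated:

```python
import queue
import threading

task_queue = queue.Queue()   # stands in for the RabbitMQ broker
results = []                 # collected worker output
results_lock = threading.Lock()

def worker():
    """Pop tasks off the queue and run them until a None sentinel arrives."""
    while True:
        url = task_queue.get()
        if url is None:              # sentinel: shut this worker down
            task_queue.task_done()
            break
        # A real worker would fetch the URL here; we just record the work.
        with results_lock:
            results.append(f"downloaded {url}")
        task_queue.task_done()

# Start two workers, analogous to launching Celery worker processes.
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

# The "web app" just pushes messages and moves on -- it never blocks on a download.
for url in ["http://example.com/a", "http://example.com/b", "http://example.com/c"]:
    task_queue.put(url)

task_queue.join()                    # wait until every task is marked done
for _ in threads:
    task_queue.put(None)             # one shutdown sentinel per worker
for t in threads:
    t.join()

print(sorted(results))
```

Celery adds what this toy version lacks: a network-visible broker, serialized messages, retries, and result backends — but the push/pop division of labor is the same.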