diff --git a/_vendor/github.com/linode/linode-docs-theme/content/testpages/large-diagram/index.md b/_vendor/github.com/linode/linode-docs-theme/content/testpages/large-diagram/index.md
index 27f4646b30a..32e0a7a7fa8 100644
--- a/_vendor/github.com/linode/linode-docs-theme/content/testpages/large-diagram/index.md
+++ b/_vendor/github.com/linode/linode-docs-theme/content/testpages/large-diagram/index.md
@@ -13,7 +13,7 @@ Figure 3 is the built-out reference architecture that includes:
* A media processing lifecycle management workflow application built in Linode Kubernetes Engine using Argo Events and Argo Workflows
* Content distribution using Akamai CDN.
-The illustrated deployment method for this architecture is via Terraform and Helm Charts. Linode supports Terraform through the Linode Provider and Argo supports Argo Events Helm Chart and Argo Workflow Helm Chart for application deployment. Our reference architecture also includes deployment automation using GitHub for source code and GitHub Actions for continuous delivery. Finally, Argo configurations which include Event Sources, Sensors, Triggers, and Workflows are all set up using YAML files that can be applied through Kubectl or through Argo CLI. There are a number of Argo Events YAML Configuration Examples as well as Argo Workflow YAML Configuration Examples to get you started. The benefit of this design is that the entire reference architecture from the infrastructure, to the application, to the application setup and configuration can be completely automated supporting cloud native and DevOps principles.
+The illustrated deployment method for this architecture is via Terraform and Helm Charts. Linode supports Terraform through the Linode Provider and Argo supports Argo Events Helm Chart and Argo Workflow Helm Chart for application deployment. Our reference architecture also includes deployment automation using GitHub for source code and GitHub Actions for continuous delivery. Finally, Argo configurations which include Event Sources, Sensors, Triggers, and Workflows are all set up using YAML files that can be applied through kubectl or through Argo CLI. There are a number of Argo Events YAML Configuration Examples as well as Argo Workflow YAML Configuration Examples to get you started. The benefit of this design is that the entire reference architecture from the infrastructure, to the application, to the application setup and configuration can be completely automated supporting cloud native and DevOps principles.
1. Starting at the left-hand side of the diagram we have content creators with the ability to ingest files into Linode Object Storage which is used as the content landing point. Object storage can be set up to receive files from CLIs, programmatic integrations, and desktop tools such as Cyberduck. Supported upload methods are described in the Linode Object Storage documentation. Additionally Linode Object Storage supports lifecycle policies so that we can automatically purge source files regularly. A purging policy should only be implemented if a separate system-of-record for your high resolution source content is maintained.
@@ -52,7 +52,7 @@ Figure 3 is the built-out reference architecture that includes:
* A media processing lifecycle management workflow application built in Linode Kubernetes Engine using Argo Events and Argo Workflows
* Content distribution using Akamai CDN.
-The illustrated deployment method for this architecture is via Terraform and Helm Charts. Linode supports Terraform through the Linode Provider and Argo supports Argo Events Helm Chart and Argo Workflow Helm Chart for application deployment. Our reference architecture also includes deployment automation using GitHub for source code and GitHub Actions for continuous delivery. Finally, Argo configurations which include Event Sources, Sensors, Triggers, and Workflows are all set up using YAML files that can be applied through Kubectl or through Argo CLI. There are a number of Argo Events YAML Configuration Examples as well as Argo Workflow YAML Configuration Examples to get you started. The benefit of this design is that the entire reference architecture from the infrastructure, to the application, to the application setup and configuration can be completely automated supporting cloud native and DevOps principles.
+The illustrated deployment method for this architecture is via Terraform and Helm Charts. Linode supports Terraform through the Linode Provider and Argo supports Argo Events Helm Chart and Argo Workflow Helm Chart for application deployment. Our reference architecture also includes deployment automation using GitHub for source code and GitHub Actions for continuous delivery. Finally, Argo configurations which include Event Sources, Sensors, Triggers, and Workflows are all set up using YAML files that can be applied through kubectl or through Argo CLI. There are a number of Argo Events YAML Configuration Examples as well as Argo Workflow YAML Configuration Examples to get you started. The benefit of this design is that the entire reference architecture from the infrastructure, to the application, to the application setup and configuration can be completely automated supporting cloud native and DevOps principles.
1. Starting at the left-hand side of the diagram we have content creators with the ability to ingest files into Linode Object Storage which is used as the content landing point. Object storage can be set up to receive files from CLIs, programmatic integrations, and desktop tools such as Cyberduck. Supported upload methods are described in the Linode Object Storage documentation. Additionally Linode Object Storage supports lifecycle policies so that we can automatically purge source files regularly. A purging policy should only be implemented if a separate system-of-record for your high resolution source content is maintained.
diff --git a/ci/vale/dictionary.txt b/ci/vale/dictionary.txt
index b0f4c0dba0b..f39c3938016 100644
--- a/ci/vale/dictionary.txt
+++ b/ci/vale/dictionary.txt
@@ -87,6 +87,7 @@ arping
arptables
arthashastra
Asana
+ASCIIbetical
aspell
askbot
aske
@@ -316,6 +317,7 @@ clearsign
clearspace
cleartext
cli
+CLIs
clickboard
clickjacking
client1
@@ -335,10 +337,13 @@ clustermgr
clustermgrguest
cmd
cmdmod
+CMS
+CMSs
cmusieve
cnf
codebase
codec
+codefresh
Codeone
codepen
colab
@@ -391,6 +396,8 @@ cqlshrc
craftbukkit
craigslist
crashingdaily
+CRD
+CRDs
createdt
createfromstackscript
crewless
@@ -405,6 +412,8 @@ Crossplane
crosstab
crowdsourced
crowdsourcing
+CRM
+CRMs
crt
crypters
crypto
@@ -536,6 +546,8 @@ distros
dists
django
dkim
+DLC
+DLCs
dlncr
dmg
dmitriy
@@ -578,6 +590,7 @@ drush
drwxr
dshield
dsn
+DSPs
du
duf
ducati
@@ -597,6 +610,7 @@ ecto
edmonds
edu
efi
+EGroupware
ejabberd
el6
el7
@@ -717,6 +731,7 @@ flamegraphs
flatpress
florian
flowlens
+Fluentd
flyspray
Focalboard
foodcritic
@@ -728,6 +743,7 @@ fmt
fn
fpm
fqdns
+FQCNs
fragging
framebuffer
frakti
@@ -839,6 +855,7 @@ graphql
Grav
Gravatar
Gravatars
+Gravitee
graylog
graylog2
greenbone
@@ -851,6 +868,9 @@ groupinfo
grsecurity
gsa
gsad
+GSLT
+GSLTs
+GTalk
gtop
guacd
gui
@@ -946,6 +966,7 @@ hyperefficient
HyperLogLog
hyperparameter
hyperscaler
+hyperscalers
hypervisor
hypervisors
IaC
@@ -1037,6 +1058,8 @@ ipaserver
ipchains
ipconfig
iperf
+iPads
+iPhones
ips
ipsec
ipset
@@ -1057,6 +1080,7 @@ iso
isort
isoformat
isp
+ISPConfig
isps
issuewild
istioctl
@@ -1083,6 +1107,7 @@ jira
jitsi
jk
jks
+JMeter
jobd
johansson
joomla
@@ -1161,6 +1186,8 @@ Konqueror
konsole
kotin
konversation
+KPI
+KPIs
krita
kroll
ksmbd
@@ -1168,6 +1195,7 @@ ktor
kube
kubeadm
kubeconfig
+Kubecost
kubectl
kubectx
kubeflow
@@ -1254,6 +1282,7 @@ localdomain
localhost
localuser
lockdown
+lockfile
locustfile
locustfiles
lodash
@@ -1413,6 +1442,14 @@ mnesia
mngtmpaddr
moby
mod_autoindex
+mod_deflate
+mod_evasive
+mod_expires
+mod_perl
+mod_php
+mod_proxy
+mod_python
+mod_rewrite
moddable
modinfo
modsecurity
@@ -1447,6 +1484,7 @@ msgid
msgpack
msmtp
msps
+MSTs
MTAs
mtr
mtu
@@ -1463,6 +1501,7 @@ multiplatform
Multiplo
multiport
multiprotocol
+multirepo
multiset
multisite
multitenant
@@ -1639,6 +1678,7 @@ openvpn
openvz
openzipkin
Opin
+osCommerce
ossec
ostemplate
osx
@@ -1726,6 +1766,7 @@ photoshop
php
php5
php7
+phpFox
phpmyadmin
Phrack
phusion
@@ -1751,10 +1792,12 @@ plex
plone
pluggable
png
+pnpm
pocketmine
podman
poettering
pokemon
+polyrepo
popeye
pop3
pop3d
@@ -1936,6 +1979,7 @@ remediations
remi
remmina
remotehost
+Replibyte
reparseGDBs
replicants
replicaset
@@ -2141,8 +2185,10 @@ siri
sitename
Skitch
sklearn
+skopeo
slackpkg
slackware
+SLAs
slavedb
sls
smartcard
@@ -2166,6 +2212,7 @@ solaris
solr
somaxconn
someuser
+sonatype
sonicwall
sophos
spamassassin
@@ -2190,6 +2237,7 @@ src
srv
sshd
sshfs
+sshing
sshpass
ssi
ssl
@@ -2348,6 +2396,7 @@ tincd
tinydb
tiobe
titlebar
+TWiki
tl
tld
TLDs
@@ -2381,7 +2430,9 @@ traceroute
trackbar
traefik
transcode
+transcoded
transcoder
+transcodes
transcoding
transpiled
transpiling
@@ -2400,10 +2451,12 @@ tumblr
tun0
tune2fs
tunnelblick
+turborepo
turtl
tv
tw
twilio
+Tyk
typecheck
typeform
txt
@@ -2513,6 +2566,7 @@ variadic
varnishlog
varonis
vaultwarden
+vBulletin
vcs
vdev
vdevs
@@ -2648,6 +2702,7 @@ worker1
worker2
workgroup
wp
+WPSolr
wpuser
writability
writecaps
@@ -2669,6 +2724,7 @@ xen
xenial
xerus
Xfce
+XHProf
xkcd
xlsx
xlsxwriter
diff --git a/docs/guides/akamai/distributed-demand-side-platform/dsp-design-diagram.jpg b/docs/guides/akamai/distributed-demand-side-platform/dsp-design-diagram.jpg
new file mode 100644
index 00000000000..45a27a93139
Binary files /dev/null and b/docs/guides/akamai/distributed-demand-side-platform/dsp-design-diagram.jpg differ
diff --git a/docs/guides/akamai/distributed-demand-side-platform/index.md b/docs/guides/akamai/distributed-demand-side-platform/index.md
new file mode 100644
index 00000000000..02e7ca0ac02
--- /dev/null
+++ b/docs/guides/akamai/distributed-demand-side-platform/index.md
@@ -0,0 +1,88 @@
+---
+slug: distributed-demand-side-platform
+title: "Ad-Tech on Akamai: Distributed Demand-Side Platform"
+description: "Details and architectures demonstrating the ability to host a distributed demand-side platform on Akamai cloud computing."
+authors: ["Linode"]
+contributors: ["Linode"]
+published: 2024-05-07
+keywords: ['adtech','ad-tech','dsp']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+---
+
+Advertisers (and ad agencies) use demand-side platforms (DSPs) to engage in programmatic ad buying. DSPs enable advertisers to configure their ad campaigns and place bids for ad inventory (ad space on a publisher’s website). As the volume of bids from advertisers and ad inventory from publishers increases, so does the complexity of processing bids and matching them to the available inventory.
+
+DSPs require performant and scalable cloud infrastructure to process ad campaigns and submit bids. In addition, latency within this infrastructure needs to be minimized at every step. Lower latency enables the ad to load quickly alongside the publisher's web page.
+
+This guide covers a DSP solution that decentralizes key infrastructure components, including the frontend servers and real-time bidding (RTB) servers. Moving this infrastructure closer to users, coupled with adding a robust caching system for ad inventory and user profiles, addresses the main challenges of ad serving, including low latency requirements and high egress fees.
+
+## Overcoming Challenges
+
+Like almost any specialized production workload, a distributed demand-side platform requires unique infrastructure considerations. Listed below are several challenges, each of which can be mitigated or minimized through thoughtfully designed architecture.
+
+### Latency Sensitivity
+
+*Identify sources of high latency and minimize the latency impact of those components.*
+
+Ad serving requires significantly lower latency than many other systems. An ad needs to be selected and displayed to the end-user as quickly as possible. Even small increases in latency can have large negative impacts on customer SLAs and end-user conversion rates. In many cases, a centralized ad serving infrastructure is the primary cause of high latency, though you should identify any other components in your existing system that may also contribute to it.
+
+The distributed nature of this solution brings key components closer to the end-user, reducing latency compared to more traditional, centralized systems. In addition, failover is provided for each region and occurs quickly to reduce downtime and minimize the impact on latency.
+
+### Cost Sensitivity (Low Profit Margins)
+
+*Identify significant sources of infrastructure costs and determine ways to reduce those costs.*
+
+Due to the relatively small profit margins in the ad serving space, cloud infrastructure costs directly impact profitability. Reducing cloud costs plays a critical role in infrastructure planning.
+
+One major source of infrastructure costs is egress fees. Hosting a distributed ad serving system on Akamai’s Cloud Computing platform can eliminate or significantly reduce egress costs compared to hyperscalers. The solution in this guide led to a 40% reduction in costs for one profiled customer.
+
+Another cause of increased cloud costs and egress fees is a high volume of traffic. Decentralizing some components but not others can increase cross-region traffic when the distributed infrastructure communicates with the centralized infrastructure. To limit egress fees associated with this traffic, a caching system should be implemented on the distributed systems. Using this method, local instances sync critical data (like ad inventory and bids) to reduce network traffic to the centralized cloud. Also, because of Akamai’s global network and relationship with other hyperscalers, egress costs when transferring data stored centrally on another hyperscaler are also eliminated or reduced.
+
+### Integration and Migration Effort
+
+*Consider the amount of effort that infrastructure changes will create and design an architecture that reduces the effort wherever possible.*
+
+When re-architecting an application, the effort required to design and integrate the new systems, as well as to migrate to another provider, can pose significant challenges. The solution in this guide requires moving only part of the ad serving workflow to the Akamai platform, which means that many centralized components do not need to be migrated. The ad inventory system of record remains unchanged, as do other key databases. The overall effort is greatly reduced compared to other possible solutions.
+
+## Infrastructure Design Diagram
+
+The diagram below showcases the infrastructure components that enable a multi-region DSP on Akamai cloud computing, while retaining existing centralized data systems. This includes components for routing requests to the region closest to the end-user (the advertisers and ad agencies), load balancing requests between multiple backend systems, caching a copy of any centralized data, and monitoring infrastructure for downtime.
+
+
+
+1. The clients (advertisers) interact with the DSP in order to configure campaigns and bid on inventory.
+
+1. The advertiser submits a bid for their ad to be displayed to an end-user. This takes the form of an HTTPS API request.
+
+1. The request is routed to one of multiple compute regions (data centers). Since this is a distributed application, the request is routed through an intelligent DNS-based load balancing solution, such as Akamai [Global Traffic Manager (GTM)](https://www.akamai.com/products/global-traffic-management). The global load balancer determines which region can best serve the client's request, taking into account location, performance, and availability. A load balancing solution like this is an ideal way to reduce latency (improving ad display speed) and increase resilience (a single failure doesn’t affect all capacity).
+
+1. Local load balancers, such as a Compute Instance running HAProxy, route the request to one of several backend clusters. Multiple clusters are often used within a single region for redundancy and scaling. Orchestration platforms, such as [LKE](https://www.linode.com/products/kubernetes/), can be used to manage cluster infrastructure and operation.
+
+1. The frontend API gateway starts processing the request. These systems operate in front of the bidding servers to reduce egress and bid processing costs. These frontend servers typically apply business logic for communicating with ad exchanges, bidding servers, and any other microservices on the cluster.
+
+1. The frontend gateway sends the bid request to the bidding servers. During this bidding process, the bid is matched with local ad inventory and compared against other advertisers using the DSP’s platform.
+
+1. When processing the bid, a local caching server is queried. This cache includes data from any centralized infrastructure, such as databases for ad inventory, user profiles, and more. This eliminates the delay associated with querying the centralized system of record on every request. A minimal caching sketch appears after this list.
+
+ 1. The local cache is updated periodically. Data within the cache is refreshed to ensure the most up-to-date information is used.
+
+1. Information from the local cache is sent back to the bidding servers.
+
+1. The bidding server processes the bid and determines whether it is accepted (and delivered to ad exchanges or publishers) or rejected.
+
+1. The DSP notifies the advertiser of the results of the bid by updating the web interface.
+
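+As a minimal sketch of the caching step above, the commands below assume a Redis-based local cache and use hypothetical key names (`inventory:*` and `profile:*`); the actual data model depends on your bidding software:
+
+```command
+# Periodically warm the local cache from the centralized system of record.
+redis-cli SET inventory:site-123 '{"floor_cpm": 1.25, "formats": ["banner", "video"]}' EX 300
+redis-cli SET profile:user-456 '{"segments": ["auto", "travel"]}' EX 300
+
+# During bid processing, the bidding servers read from the local cache
+# instead of querying the centralized databases on every request.
+redis-cli GET inventory:site-123
+redis-cli GET profile:user-456
+```
+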
+## Systems and Components
+
+- **Global load balancer.** Akamai [Global Traffic Manager (GTM)](https://www.akamai.com/products/global-traffic-management) can be used to route incoming requests to the region closest to the end-user.
+
+- **Local load balancer.** Within each region, a load balancer is used to balance traffic between multiple backend clusters. This load balancer should be redundant and should gracefully fail over to a secondary load balancer to prevent downtime. Possible load balancing solutions include [NodeBalancers](https://www.linode.com/products/nodebalancers/), NGINX, and HAProxy.
+
+- **DSP ad serving cluster.** The ad serving clusters run on a container orchestration platform like [LKE](https://www.linode.com/products/kubernetes/) and are composed of multiple [Compute Instances](https://www.linode.com/products/dedicated-cpu/), each running one or more of the components listed below. Multiple clusters should be used within each region to maintain redundancy and enable high availability.
+
+ - **Frontend API Gateway.** This gateway proxies all API requests for other ad-related APIs hosted on the cluster. Possible API gateways include Kong, NGINX, Tyk, and Gravitee.
+
+ - **Real-time bidding server.** Hosts all logic that determines which bids are accepted or rejected. It is accessible by an API and requests will only come from the API gateway in that cluster.
+
+ - **Caching system.** Each cluster should include a local cache of any critical centralized ad-related databases, such as ad inventory and possibly user profiles. Possible local caching systems include Redis, Apache Ignite, Memcached, and Couchbase.
+
+- **Monitoring system.** To monitor the state of the load balancers and clusters in that region, a monitoring and/or logging system should be configured. Possible monitoring solutions include Prometheus, Grafana, and ThousandEyes.
\ No newline at end of file
diff --git a/docs/guides/akamai/get-started/parent-child-accounts/index.md b/docs/guides/akamai/get-started/parent-child-accounts/index.md
deleted file mode 100644
index ddfa0eed83c..00000000000
--- a/docs/guides/akamai/get-started/parent-child-accounts/index.md
+++ /dev/null
@@ -1,153 +0,0 @@
----
-slug: parent-child-accounts
-description: 'Learn how parent and child accounts can help Akamai partners manage multiple accounts.'
-license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
-modified: 2024-04-23
-keywords: ["akamai partners", "parent", "child", "parent/child relationship"]
-published: 2022-04-23
-title: Parent and Child Accounts for Akamai Partners
-aliases: ['/guides/akamai/get-started/parent-child-account/']
-authors: ["Linode"]
-contributors: ["Linode"]
-promo_default: false
----
-## Overview
-The parent and child accounts feature was designed with Akamai partners and their clients in mind. It allows partners to manage multiple end customers’ accounts with a single login to their company’s account.
-
-{{< note type="secondary" >}}
-This feature is only available for Akamai partners and their end customers with an Akamai Cloud Computing contract. To learn more about it, contact your Akamai representative.
-
-{{< /note >}}
-
-Depending on whether you're an Akamai partner or a partner’s end customer, this feature has different implications. Select your role to go to content that applies to you.
-- [I’m an Akamai partner](#for-akamai-partners).
-- [I’m a partner’s end customer](#for-end-customers).
-
-
-## For Akamai partners
-### About the feature
-
-The parent and child accounts feature allows you, an Akamai partner (parent account holder), to switch between and manage your end customers’ accounts (child accounts) in Cloud Manager.
-
-Child account users can monitor your actions on their account. All actions are logged and can be viewed in Cloud Manager, either through the Events panel or the [Events](https://cloud.linode.com/events) page.
-
-The parent/child relationship doesn’t restrict the child account and its users from creating support tickets or signing up for managed services for their accounts.
-
-### Terminology
-
-#### Parent account
-In the context of your account type, you need to be familiar with the following terms.
-
-- **Parent account**. Your, Akamai partner, account.
-- **Parent account user with full access** (parent account admin). An Akamai partner account owner who has access to child accounts by default. They manage permissions for other admins and child account users with limited access. They can provide full access to a parent account user with limited access who then also becomes a parent account admin.
-- **Parent account users with limited access** Users of a parent account that don’t have access to child accounts by default. Only the parent account admin can grant them access to all child accounts the parent account has a contractual parent/child relationship with. To learn more, see [Enable access to child accounts for parent account users with limited access](#enable-access-to-child-accounts-for-parent-account-users-with-limited-access).
-
-#### Parent/child relationship
-In the context of the parent/child relationship, you need to be familiar with the following terms.
-
-- **Parent account**. Your, Akamai partner, account.
-- **Child account parent user**. A single user on a given child account that represents all parent account users with access to child accounts. It’s used to manage the child account. Depending on the access level and permissions provided by the child account admin, it can be a child account parent user with either full or limited access. To learn more, see [Switch accounts](#switch-accounts).
-- **Child account**. An account belonging to your end customer.
-- **Child account users with full access** (child account admins). Users of the child account with full access to it. They manage permissions for all users on the account; other admins, child account users with limited access, and the child account parent user. The only exception is billing, child account admins have read-only access to it and they can’t modify the child account parent user’s read/write permission to it.
-- **Child account users with limited access**. Non-admin users of the child account whose permissions are managed by the child account admin.
-
-One parent account can manage many child accounts.
-
-### Enable access to child accounts for parent account users with limited access
-
-As a parent account admin, you can grant access to the child account for parent account users with limited access.
-
-On the User Permissions page, you have the additional general permission - **Enable child account access**.
-- This permission is available only for parent accounts.
-- Parent account users with limited access have this option disabled by default. They need to ask a parent account admin to enable this option for them.
-- It enables access to **all** child accounts the parent account has a contractual parent/child relationship with.
-
-To enable the access:
-1. Log in to [Cloud Manager](https://cloud.linode.com/).
-1. In the main menu, go to **Account**.
-1. In the **User & Grants** tab, next to the name of a parent account user with limited access, click **...** > **User Permissions**.
-1. In the **General Permissions** section, switch the **Enable child account access** option on. Click **Save**.
-
-### Switch accounts
-
-When switching to a child account, you become the child account parent user. By default, you have:
-- Read/write permission for billing. This is the only permission that can’t be modified even by a child account admin.
-- Read-only permission to all asset instances existing at the time of parent/child relationship creation, such as linodes, load balancers, longview clients. This allows the child account parent user to see existing child accounts’ assets, but not create, update, or delete them.
-- No access to asset instances created after the creation of the parent/child relationship. The child account parent user needs to ask a child account admin to grant them access to new instances if necessary.
-- The [global permissions](/docs/products/platform/accounts/guides/user-permissions/) set off.
-
-A child account admin can change the child account parent user’s default permissions and expand or restrict its access to the child account, except for billing. If the child account admin provides the child account parent user with full access to the child account, the child account parent user can fully manage the account, including its users.
-
-To switch accounts:
-1. Log in to [Cloud Manager](https://cloud.linode.com/).
-1. In the top corner, click the name of your account.
-
- 
-
-1. Click **Switch account**. If you don’t see this button, it means you’re a parent account user with limited access and you need to ask the parent account admin to provide you the access to child accounts or your account is not involved in the parent/child relationship.
-
- 
-
-1. Select an account from the list you want to switch to.
-Now you act as the child account parent user. Note that you can click the **switch back to your account** link to switch back.
-
- 
-
-### Delete a parent account
-A parent account can be deleted only if all parent/child relationships are contractually removed from your parent account, meaning you don’t have active contracts with end customers.
-
-Once this condition is met, to close an account:
-
-1. Log in to [Cloud Manager](https://cloud.linode.com/).
-1. In the main menu, go to **Account**.
-1. In the **Settings** tab, go to the **Close Account** section, and click the **Close Account** button.
-1. Follow the on-screen instructions and click **Close Account**.
-
-## For end customers
-
-### About the feature
-The parent and child accounts feature allows an Akamai partner (Parent account holder) to manage your account (Child account) in Cloud Manager.
-
-You can monitor Akamai partner’s actions on your account. All actions are logged and can be viewed in Cloud Manager, either through the Events panel or the [Events](https://cloud.linode.com/events) page.
-
-The parent/child relationship doesn’t restrict your account and its users from creating support tickets or signing up for managed services.
-
-### Terminology
-In the context of the parent/child relationship, you need to be familiar with the following terms.
-
-
-- **Parent account**. A company account of your Akamai partner.
-- **Child account**. Your, end customer’s, account.
-- **Child account users with full access** (child account admins). Users of your account with full access to it. They manage permissions for all users on the account; other admins, child account users with limited access, and the child account parent user. The only exception is billing, they have read-only access to it and they can’t modify the child account parent user’s read/write permission to it.
-- **Child account users with limited access**. Non-admin users of your account whose permissions are managed by the child account admin.
-- **Child account parent user**. A single user representing all Akamai partner users who have access to this child account. It’s used to manage your child account. Depending on the access level and permissions provided by the child account admin, it can be a child account parent user with either full or limited access. To learn more, see [Manage account access and permissions for the child account parent user](#manage-account-access-and-permissions-for-the-child-account-parent-user). The child account parent user is created automatically based on your contract with an Akamai partner. The user exists on your account as long as you have a contractual relationship with your Akamai partner.
-
-### Manage account access and permissions for the child account parent user
-
-By default, a child account parent user user has:
-- Read/write permission for billing. This is the only permission that can’t be modified even by a child account admin.
-- Read-only permission to all asset instances existing at the time of parent/child relationship creation, such as linodes, load balancers, longview clients. This allows the child account parent user to see existing child accounts’ assets, but not create, update, or delete them.
-- No access to asset instances created after the creation of the parent/child relationship. The child account parent user needs to ask a child account admin to grant them access to new instances if necessary.
-- The [global permissions](/docs/products/platform/accounts/guides/user-permissions/) set off.
-
-As a child account admin you can change the default permissions of the child account parent user to expand or restrict access as you see fit, except for billing. If you provide the child account parent user with full access to the child account, it becomes the child account parent user with full access and can fully manage your child account, including its users.
-
-To manage access settings:
-1. Log in to [Cloud Manager](https://cloud.linode.com/).
-1. In the main menu, go to **Account**.
-1. In the **User & Grants** tab, in the **Parent User Settings** table, next to the name of the child account parent user, click **Manage Access**.
-1. In the **General Permissions** section:
- 1. Select the permissions you want to grant. Remember that permissions with the $ symbol next to them, can incur additional charges. Note also the **Full Account Access** switch that gives the child account parent user full access to your account.
- 1. Click **Save**.
-1. In the **Specific Permissions** section, select the access level for each feature or use the **Set all permissions to** dropdown. Click **Save**.
-
-### Delete a child account
-A child account admin can delete a child account only if the parent/child relationship is contractually removed, meaning you don’t have an active contract with an Akamai partner.
-
-Once this condition is met, to close an account:
-
-1. Log in to [Cloud Manager](https://cloud.linode.com/).
-1. In the main menu, go to **Account**.
-1. In the **Settings** tab, go to the **Close Account** section, and click the **Close Account** button.
-1. Follow the on-screen instructions and click **Close Account**.
-
diff --git a/docs/guides/akamai/get-started/parent-child-accounts/parent_child_term.png b/docs/guides/akamai/get-started/parent-child-accounts/parent_child_term.png
deleted file mode 100644
index 130cf66b6d8..00000000000
Binary files a/docs/guides/akamai/get-started/parent-child-accounts/parent_child_term.png and /dev/null differ
diff --git a/docs/guides/akamai/get-started/parent-child-accounts/parent_switch.png b/docs/guides/akamai/get-started/parent-child-accounts/parent_switch.png
deleted file mode 100644
index a0b45cac711..00000000000
Binary files a/docs/guides/akamai/get-started/parent-child-accounts/parent_switch.png and /dev/null differ
diff --git a/docs/guides/akamai/get-started/parent-child-accounts/parent_term.png b/docs/guides/akamai/get-started/parent-child-accounts/parent_term.png
deleted file mode 100644
index 6e9836e1d71..00000000000
Binary files a/docs/guides/akamai/get-started/parent-child-accounts/parent_term.png and /dev/null differ
diff --git a/docs/guides/akamai/get-started/parent-child-accounts/switch_account.png b/docs/guides/akamai/get-started/parent-child-accounts/switch_account.png
deleted file mode 100644
index b9207be8f4b..00000000000
Binary files a/docs/guides/akamai/get-started/parent-child-accounts/switch_account.png and /dev/null differ
diff --git a/docs/guides/akamai/get-started/parent-child-accounts/switch_back.png b/docs/guides/akamai/get-started/parent-child-accounts/switch_back.png
deleted file mode 100644
index 95eae9a2c93..00000000000
Binary files a/docs/guides/akamai/get-started/parent-child-accounts/switch_back.png and /dev/null differ
diff --git a/docs/guides/akamai/get-started/vod-transcoding-ott-akamai-cloud-computing/index.md b/docs/guides/akamai/get-started/vod-transcoding-ott-akamai-cloud-computing/index.md
new file mode 100644
index 00000000000..8672635267c
--- /dev/null
+++ b/docs/guides/akamai/get-started/vod-transcoding-ott-akamai-cloud-computing/index.md
@@ -0,0 +1,60 @@
+---
+slug: vod-transcoding-ott-akamai-cloud-computing
+title: "VOD Transcoding for OTT Media with Akamai Cloud Computing"
+description: "This guide outlines design requirements for a video on demand (VOD) transcoding workflow for an over-the-top (OTT) media service on Akamai Cloud Computing."
+authors: ["Linode"]
+contributors: ["Linode"]
+published: 2024-05-06
+keywords: ['video transcoding','video on demand', 'vod', 'over-the-top media service', 'ott']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+---
+
+Video on demand (VOD) streaming services rely on the transcoding of video streams to efficiently distribute content. In transcoding workflows, video streams are converted to formats suited to the network and device constraints of the environments where they are viewed. Video transcoding is a compute-intensive process, so maximizing the number of video streams that can be transcoded on available hardware is a primary consideration. Transcoding efficiency can vary between the compute offerings of different infrastructure providers, so evaluate transcoding performance when selecting a cloud infrastructure platform.
+
+Streaming services are also latency-sensitive, and the geographic location of the transcoding service affects the latency of the stream. Choosing a location closer to viewers can reduce latency, so being able to run the service in compute regions that are near your audience is important.
+
+This guide outlines a transcoding architecture that supports an over-the-top (OTT) media platform and that has been implemented and proven by a profiled Akamai customer. This profiled customer delivers live TV channels, on-demand content, and catch-up TV services to viewers across the globe. The implementation significantly reduced egress costs while retaining transcoding performance competitive with solutions from hyperscaler cloud platforms.
+
+## Video on Demand Transcoding Workflow
+
+At a high level, video is handled by a transcoding service with the following workflow:
+
+1. Video content is ingested into the transcoding service from an intermediary storage location (often an object storage bucket).
+1. The video transcoding service transcodes the stream into desired video formats.
+1. A content delivery network accepts the transcoded video and distributes it to platform audiences.
+
+## Overcoming Challenges
+
+### Cost Sensitivity
+
+*Identify significant sources of infrastructure costs and determine ways to reduce those costs.*
+
+Because video transcoding is a compute-intensive process, compute resources are a primary source of infrastructure cost for streaming services. It’s important to select compute hardware that performs well for the software run by the transcoding service. It’s also important to test example transcoding workflows on competing cloud infrastructure platforms and measure the transcoding efficiency of each. This can be done by selecting cost-comparable compute instances between platforms, running transcoding tests on each, and measuring the number of parallel streams each can sustain.
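+
+For example, a rough benchmark along the following lines can be run on each candidate instance type; the sample file, codec settings, and stream count here are assumptions to adapt to your own workload:
+
+```command
+# Hypothetical benchmark: run N transcodes in parallel and measure total wall-clock time.
+N=8
+time (
+    for i in $(seq 1 $N); do
+        ffmpeg -loglevel error -i sample.mp4 -c:v libx264 -preset veryfast -f null - &
+    done
+    wait
+)
+```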
+
+After a video stream is transcoded by the transcoding service, it needs to be distributed by a content delivery network (CDN). This can also be a significant source of cost when there are egress fees between the transcoding service platform and the content delivery network. By selecting Akamai compute offerings for the transcoding service alongside Akamai’s CDN for content delivery, the egress fees for that traffic can be eliminated entirely.
+
+### Latency Sensitivity
+
+*Identify sources of high latency and minimize the latency impact of those components.*
+
+Low latency is critical for video streaming services. To enable low latency, transcoding services should be located near their audiences. By working with a cloud infrastructure platform that offers a wide selection of regions in different geographies, you can ensure the proximity of your transcoding service as your business expands into new areas. Akamai’s global footprint of compute regions supports reach and expansion into new audiences.
+
+## Video on Demand Transcoding Design Diagram
+
+This solution creates a video transcoding service on the Akamai Cloud Computing platform. The cloud transcoding service is composed of multiple compute instances working in parallel to handle the transcoding load. Object storage locations store content uploaded to the transcoding service and videos that have been transcoded. Transcoded video streams are distributed by the Akamai CDN to audiences.
+
+
+
+1. Raw live or on-demand videos are uploaded to an object storage location that houses incoming videos that require processing.
+1. This location is monitored by the transcoding cluster for any new uploads.
+1. Uploaded video streams are transcoded by virtual machines in the transcoding cluster into the desired output formats (see the example command after this list). These transcoded video streams are uploaded to object storage.
+1. A content delivery network distributes the video to viewers' devices, using the object storage location from the previous step as the content origin.
+1. An infrastructure automation API allows the transcoding cluster infrastructure to be managed by application developers. Updates to the cluster's hardware and software can be deployed through this API.
+
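+As a minimal illustration of step 3, the command below sketches one possible transcode of an uploaded source file into a single 720p HLS rendition with ffmpeg. The transcoding software, codecs, and bitrate ladder used in a real deployment are not specified in this guide, so treat these settings as placeholders:
+
+```command
+# Hypothetical example: produce one 720p HLS rendition from an uploaded source file.
+# Production pipelines typically generate several renditions (an ABR ladder) in parallel.
+ffmpeg -i source.mp4 \
+    -c:v libx264 -preset veryfast -b:v 3000k -s 1280x720 \
+    -c:a aac -b:a 128k \
+    -hls_time 6 -hls_playlist_type vod \
+    720p.m3u8
+```
+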
+### Systems and Components
+
+- **Content upload storage**: An object storage location that stores content uploads that require transcoding.
+- **Video transcoding cluster**: A cluster of compute instances that transcodes the uploaded videos into the desired formats.
+- **Transcoding output storage/distribution origin**: An object storage location that stores transcoded videos.
+- **Content delivery network**: Used to cache, distribute, and control access to the video library.
+- **Control API**: An API used by an application team to manage and maintain the video transcoding service infrastructure.
\ No newline at end of file
diff --git a/docs/guides/akamai/get-started/vod-transcoding-ott-akamai-cloud-computing/vod-design-diagram.jpg b/docs/guides/akamai/get-started/vod-transcoding-ott-akamai-cloud-computing/vod-design-diagram.jpg
new file mode 100644
index 00000000000..c1e8ec04d7e
Binary files /dev/null and b/docs/guides/akamai/get-started/vod-transcoding-ott-akamai-cloud-computing/vod-design-diagram.jpg differ
diff --git a/docs/guides/applications/configuration-management/ansible/getting-started-with-ansible/index.md b/docs/guides/applications/configuration-management/ansible/getting-started-with-ansible/index.md
index e0fbff022c7..c13578cec9f 100644
--- a/docs/guides/applications/configuration-management/ansible/getting-started-with-ansible/index.md
+++ b/docs/guides/applications/configuration-management/ansible/getting-started-with-ansible/index.md
@@ -6,7 +6,7 @@ description: "In this guide, we'll show you how to use Ansible to perform basic
authors: ["Joshua Lyman"]
contributors: ["Joshua Lyman"]
published: 2018-03-21
-modified: 2019-06-19
+modified: 2024-05-06
keywords: ["ansible", "ansible configuration", "ansible provisioning", "ansible infrastructure", "ansible automation", "ansible configuration change management", "ansible server automation"]
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
aliases: ['/applications/ansible/getting-started-with-ansible/','/applications/configuration-management/getting-started-with-ansible/','/applications/configuration-management/ansible/getting-started-with-ansible/']
@@ -17,7 +17,7 @@ external_resources:
tags: ["automation"]
---
-
+
## What is Ansible?
@@ -46,11 +46,12 @@ This guide introduces the basics of installing Ansible and preparing your enviro
- Install and configure Ansible on your computer or a Linode to serve as the control node that will manage your infrastructure nodes.
- Create two Linodes to manage with Ansible and establish a basic connection between the control node and your managed nodes. The managed nodes will be referred to as `node-1`, and `node-2` throughout the guide.
- {{< note respectIndent=false >}}
-The examples in this guide provide a manual method to establish a basic connection between your control node and managed nodes as a way to introduce the basics of Ansible. If you would like to learn how to use Ansible's [Linode module](https://docs.ansible.com/ansible/latest/modules/linode_v4_module.html) to automate deploying and managing Linodes, see the [How to use the Linode Ansible Module to Deploy Linodes](/docs/guides/deploy-linodes-using-ansible/). The guide assumes familiarity with Ansible modules, Playbooks, and dynamic inventories.
+ {{< note >}}
+ The examples in this guide provide a manual method to establish a basic connection between your control node and managed nodes as a way to introduce the basics of Ansible. If you would like to learn how to use Ansible's [Linode module](https://docs.ansible.com/ansible/latest/modules/linode_v4_module.html) to automate deploying and managing Linodes, see the [How to use the Linode Ansible Module to Deploy Linodes](/docs/guides/deploy-linodes-using-ansible/). The guide assumes familiarity with Ansible modules, Playbooks, and dynamic inventories.
{{< /note >}}
## Before You Begin
+
{{< note type="alert" >}}
This guide's example instructions will create up to three billable Linodes on your account. If you do not want to keep using the example Linodes that you create, be sure to [delete them](#delete-a-cluster) when you have finished the guide.
@@ -73,8 +74,8 @@ If you remove the resources afterward, you will only be billed for the hour(s) t
Repeat this procedure for each remaining node.
- {{< note respectIndent=false >}}
-This step can be automated by using Ansible's Linode module. See the [How to use the Linode Ansible Module to Deploy Linodes](/docs/guides/deploy-linodes-using-ansible/) for more information.
+ {{< note >}}
+ This step can be automated by using Ansible's Linode module. See the [How to use the Linode Ansible Module to Deploy Linodes](/docs/guides/deploy-linodes-using-ansible/) for more information.
{{< /note >}}
## Set up the Control Node
@@ -145,8 +146,6 @@ This guide was created using Ansible 2.8.
## Configure Ansible
-By default, Ansible's configuration file location is `/etc/ansible/ansible.cfg`. In most cases, the default configurations are enough to get you started using Ansible. In this example, you will use Ansible's default configurations.
-
1. To view a list of all current configs available to your control node, use the `ansible-config` command line utility.
```command
@@ -172,6 +171,15 @@ By default, Ansible's configuration file location is `/etc/ansible/ansible.cfg`.
default: false
...
```
+ {{< note >}}
+ To make advanced configurations, you will need to edit the `ansible.cfg` file which can be generated using the following command:
+
+ ```command
+ ansible-config init --disabled > ansible.cfg
+ ```
+
+ In some installations, this file will already be available in the `/etc/ansible/` directory.
+ {{< /note >}}
### Create an Ansible Inventory
@@ -191,14 +199,14 @@ Following the example below, you will add your **managed nodes** to the `/etc/an
Each bracketed label denotes an Ansible [group](http://docs.ansible.com/ansible/latest/intro_inventory.html#hosts-and-groups). Grouping your nodes by function will make it easier to run commands against the correct set of nodes.
- {{< note respectIndent=false >}}
-The `/etc/ansible` directory will not exist by default in some environments. If you find that this is the case, create it manually with the following command:
+ {{< note >}}
+ The `/etc/ansible` directory will not exist by default in some environments. If you find that this is the case, create it manually with the following command:
-```command
-mkdir /etc/ansible/
-```
+ ```command
+ mkdir /etc/ansible/
+ ```
-If you are using a non-standard SSH port on your nodes, include the port after a colon on the same line within your hosts file (`203.0.113.1:2222`).
+ If you are using a non-standard SSH port on your nodes, include the port after a colon on the same line within your hosts file (`203.0.113.1:2222`).
{{< /note >}}
## Connect to your Managed Nodes
@@ -248,4 +256,4 @@ After configuring your control node, you can communicate with your managed nodes
### Delete Your Linodes
-If you no longer wish to use the Linodes created in this guide, you can delete them using the [Linode Cloud Manager](https://cloud.linode.com/linodes). To learn how to remove Linode resources using Ansible's Linode module, see the [Delete Your Resources](/docs/guides/deploy-linodes-using-ansible/#delete-your-resources) section of the [How to use the Linode Ansible Module to Deploy Linodes](/docs/guides/deploy-linodes-using-ansible/) guide.
+If you no longer wish to use the Linodes created in this guide, you can delete them using the [Linode Cloud Manager](https://cloud.linode.com/linodes). To learn how to remove Linode resources using Ansible's Linode module, see the [Delete Your Resources](/docs/guides/deploy-linodes-using-ansible/#delete-your-resources) section of the [How to use the Linode Ansible Module to Deploy Linodes](/docs/guides/deploy-linodes-using-ansible/) guide.
\ No newline at end of file
diff --git a/docs/guides/applications/configuration-management/ansible/running-ansible-playbooks/index.md b/docs/guides/applications/configuration-management/ansible/running-ansible-playbooks/index.md
index e354d57f3a8..c7d95d51eea 100644
--- a/docs/guides/applications/configuration-management/ansible/running-ansible-playbooks/index.md
+++ b/docs/guides/applications/configuration-management/ansible/running-ansible-playbooks/index.md
@@ -28,7 +28,7 @@ This guide provides an introduction to Ansible Playbook concepts, like tasks, pl
* Install Ansible on your computer or a Linode following the steps in the [Set up the Control Node](/docs/guides/getting-started-with-ansible/#set-up-the-control-node) section of our [Getting Started With Ansible](/docs/guides/getting-started-with-ansible/) guide.
-* Deploy a Linode running Debian 9 to manage with Ansible. All Playbooks created throughout this guide will be executed on this Linode. Follow the [Getting Started With Ansible - Basic Installation and Setup](/docs/guides/getting-started-with-ansible/#set-up-the-control-node) to learn how to establish a connection between the Ansible control node and your Linode.
+* Deploy a Linode running Ubuntu 22.04 LTS to manage with Ansible. All Playbooks created throughout this guide will be executed on this Linode. Follow the [Getting Started With Ansible - Basic Installation and Setup](/docs/guides/getting-started-with-ansible/#set-up-the-control-node) to learn how to establish a connection between the Ansible control node and your Linode.
{{< note respectIndent=false >}}
When following the [Getting Started with Ansible](/docs/guides/getting-started-with-ansible/#set-up-the-control-node) guide to deploy a Linode, it is not necessary to add your Ansible control node's SSH key-pair to your managed Linode. This step will be completed using a Playbook later on in this guide.
@@ -101,7 +101,7 @@ When creating a limited user account you are required to create a host login pas
[Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html#encrypt-string-for-use-in-yaml) can also be used to encrypt sensitive data. This guide will not make use of Ansible Vault, however, you can consult the [How to use the Linode Ansible Module to Deploy Linodes](/docs/guides/deploy-linodes-using-ansible/) guide to view an example that makes use of this feature.
{{< /note >}}
-1. On your Ansible control node, create a password hash on your control node for Ansible to use in a later step. An easy method is to use Python's PassLib library, which can be installed with the following commands:
+1. On your Ansible control node, create a password hash for Ansible to use in a later step. An easy method is to use Python's PassLib library, which can be installed with the following commands:
1. Install pip, the package installer for Python, on your control node if you do not already have it installed:
@@ -124,12 +124,16 @@ $6$rounds=656000$dwgOSA/I9yQVHIjJ$rSk8VmlZSlzig7tEwIN/tkT1rqyLQp/S/cD08dlbYctPjd
#### Disable Host Key Checking
-Ansible uses the sshpass helper program for SSH authentication. This program is included by default on Ansible 2.8. sshpass requires host key checking to be disabled on your Ansible control node.
+Ansible uses the sshpass helper program for SSH authentication.
-1. Disable host key checking. Open the `/etc/ansible/ansible.cfg` configuration file in a text editor of your choice, uncomment the following line, and save your changes.
+1. Ensure sshpass is installed on your control node:
+
+    sudo apt install sshpass
+
+1. sshpass requires host key checking to be disabled on your Ansible control node. Open the `/etc/ansible/ansible.cfg` configuration file in a text editor of your choice, uncomment the `host_key_checking` line (remove the leading `;`), set its value to `False`, and save your changes.
{{< file "/etc/ansible/ansible.cfg" ini >}}
-#host_key_checking = False
+host_key_checking=False
{{< /file >}}
@@ -141,7 +145,7 @@ In order to target your Linode in a Playbook, you will need to add it to your An
{{< file "/etc/ansible/hosts" ini >}}
[webserver]
-192.0.2.0
+192.0.2.17
{{< /file >}}
@@ -215,7 +219,7 @@ This next Playbook will take care of some common server setup tasks, such as set
line="{{ hostvars[item].ansible_default_ipv4.address }} {{ LOCAL_FQDN_NAME }} {{ LOCAL_HOSTNAME }}"
state=present
when: hostvars[item].ansible_default_ipv4.address is defined
- with_items: "{{ groups['linode'] }}"
+ with_items: "{{ groups['webserver'] }}"
- name: Update packages
apt: update_cache=yes upgrade=dist
{{< /file >}}
@@ -264,7 +268,7 @@ In order to avoid using plain text passwords in your Playbooks, you can use [Ans
pkg:
- apache2
- mysql-server
- - python-mysqldb
+ - python3-mysqldb
- php
- php-pear
- php-mysql
@@ -277,13 +281,16 @@ In order to avoid using plain text passwords in your Playbooks, you can use [Ans
- mysql
- name: Create a test database
- mysql_db: name= testDb
- state= present
+ community.mysql.mysql_db:
+ name: testDb
+ state: present
- name: Create a new user for connections
- mysql_user: name=webapp
- password='$6$rounds=656000$W.dSl'
- priv=*.*:ALL state=present
+ community.mysql.mysql_user:
+ name: webapp
+ password: 'yourpassword'
+ priv: '*.*:ALL'
+ state: present
{{< /file >}}
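+
+The playbook above uses modules from the `community.mysql` collection. This collection is included with the full `ansible` community package, but if your control node only has `ansible-core` installed, you may need to add it first:
+
+```command
+ansible-galaxy collection install community.mysql
+```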
diff --git a/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/index.md b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/index.md
new file mode 100644
index 00000000000..c2ded5f5246
--- /dev/null
+++ b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/index.md
@@ -0,0 +1,454 @@
+---
+slug: build-a-cloud-native-private-registry-with-quay
+title: "How to Build a Cloud Native Private Registry With Quay"
+description: 'Learn how to create your own cloud-native private registry using Quay. This guide covers everything from setup to deployment on a CentOS Stream instance.'
+authors: ["John Mueller"]
+contributors: ["John Mueller"]
+published: 2024-05-02
+keywords: ['build cloud-native container registry with quay','red hat quay','centos stream','private container registry','cloud-native registry','secure private registry']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+---
+
+Docker doesn’t provide long-term storage or image distribution capabilities, so developers need something more. [Docker Registry](https://docs.docker.com/registry/) performs these tasks, and using it guarantees a consistent application runtime environment through containerization. However, building an image can involve a significant time investment, which is where [Quay](https://www.redhat.com/en/resources/quay-datasheet) (pronounced *kway*) comes in. A registry like Quay can both build and store containers. You can then deploy these containers in a shorter time and with less effort than using Docker Registry.
+
+## What is Red Hat Quay?
+
+Red Hat Quay is a fault-tolerant and highly reliable registry equipped with the functionality needed to work in large-scale environments. Quay provides a purpose-built, centralized, and scalable registry platform that functions in a multi-cluster environment spanning multiple sites. Quay also analyzes container images for security vulnerabilities before you run them. This ensures that deployments spanning geographically separated areas don’t suffer from various forms of executable corruption. Also part of the security functionality, Quay offers granular access control. This means that developers working on a project adhere to the [principle of least privilege](https://www.paloaltonetworks.com/cyberpedia/what-is-the-principle-of-least-privilege), yet still have the rights needed to collaborate on tasks.
+
+## Quay Features
+
+Quay provides a wealth of features, broken down into the following categories:
+
+- **Security**:
+ - Secure container storage that provides access and authentication settings.
+ - Scans containers for added security.
+ - Continuously scans image content and provides reports on potential vulnerability issues.
+ - Uses existing authentication providers that rely on Lightweight Directory Access Protocol (LDAP) or OpenID Connect (OIDC).
+ - Logs and audits every security-related event in the system using long-term log storage.
+
+- **Flexibility**:
+ - Uses fine-grain access rules which allow you to isolate different user groups or enable collaboration between groups as needed.
+ - Allows a project to start small and scale to a much larger size without major project changes.
+ - Supports geographically distributed deployment with a single client entry point to boost performance.
+ - Provides a transparent cache of images stored in other container registries.
+ - Works with both cloud and offline environments, or a combination of the two.
+ - Incorporates support for a range of object storage services and third-party database systems.
+
+- **Developer Productivity**:
+ - Reduces the amount of work needed to build and deploy new containers.
+ - Makes it easier to manage storage growth through quota management.
+ - Provides source code management integration using simplified Continuous Integration (CI) pipelines.
+ - Maintains a "time machine" feature that protects against accidental deletion.
+
+Often omitted in reviews of Quay is that it works with more than just Docker. It also works with products like [Rancher](https://www.rancher.com/), [Hyper-V](https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/hyper-v-technology-overview), [Codefresh](https://codefresh.io/), and [Skopeo](https://github.com/containers/skopeo).
+
+### When Should You Use Quay?
+
+Quay can present some issues when working with it for the first time. For example, configuring the security features can be both time-consuming and error-prone. One of the biggest issues is that Quay isn’t really a single product, and knowing which flavor of Quay to choose can be confusing. Here is a quick overview of the various Quay flavors:
+
+- [**Project Quay**](https://www.projectquay.io/): This is the standalone, open source container registry that is comparable to [Sonatype's](https://www.sonatype.com/) [Nexus Repository OSS](https://www.sonatype.com/products/sonatype-nexus-oss) or [Harbor](https://goharbor.io/).
+- [**Red Hat Quay.io**](https://quay.io/plans/): This is the hosted version of Quay, priced by the number of private repositories you create. Public repositories are free.
+- [**Red Hat Quay**](https://www.redhat.com/en/technologies/cloud-computing/quay): This is the enterprise-level version available through [Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) for use in creating private repositories.
+
+First, choose the flavor of Quay that suits the kind of project you want to create. Next, configure the Quay environment correctly before you begin using it. Limit Project Quay to experimentation and small projects; Red Hat Quay is more appropriate for large, production-scale projects.
+
+## Before You Begin
+
+1. If you have not already done so, create a Linode account and Compute Instance. Use a minimum of a Linode 4 GB plan to create a Quay setup on CentOS Stream. See our [Getting Started with Linode](/docs/guides/getting-started/) and [Creating a Compute Instance](/docs/guides/creating-a-compute-instance/) guides.
+
+1. Follow our [Setting Up and Securing a Compute Instance](/docs/guides/set-up-and-secure/) guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.
+
+1. This guide uses Docker to run Quay containers. To install Docker, follow the instructions in our [Installing and Using Docker on CentOS and Fedora](/docs/guides/installing-and-using-docker-on-centos-and-fedora/) guide through the *Managing Docker with a Non-Root User* section. Verify that Docker is ready for use with the `docker version` command. This guide uses [Docker Community Edition (CE) 24.0.7](https://www.ibm.com/docs/en/db2/11.5?topic=docker-downloading-installing-editions), but newer versions and the Docker Enterprise Edition (EE) should work.
+
+1. This guide uses a PostgreSQL database for Quay's required long-term metadata storage (this database isn’t used for images). To install PostgreSQL, follow our [Install and Use PostgreSQL on CentOS 8](/docs/guides/centos-install-and-use-postgresql/) guide (this guide also works with CentOS Stream 9) up until the *Using PostgreSQL* section. Make sure you configure PostgreSQL to start automatically after a server restart, as shown in the example after this list.
+
+ {{< note type="warning" >}}
+ Avoid using MariaDB for your installation because the use of MariaDB is deprecated in recent versions of Quay. After installing and securing PostgreSQL, create a Quay database.
+ {{< /note >}}
+
+1. Quay uses Redis for short-term storage of real time events. To install Redis, follow our [Install and Configure Redis on CentOS 7](/docs/guides/install-and-configure-redis-on-centos-7/) guide (it also works with CentOS Stream 8 and 9) through the *Verify the Installation* section.
+
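+The following commands, referenced in the PostgreSQL item above, enable both services so they start automatically after a reboot. They assume the default `postgresql` and `redis` service names used elsewhere in this guide:
+
+```command
+sudo systemctl enable --now postgresql
+sudo systemctl enable --now redis
+```
+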
+{{< note >}}
+This guide is written for a non-root user. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see the [Users and Groups](/docs/tools-reference/linux-users-and-groups/) guide.
+{{< /note >}}
+
+## Creating a Quay Setup on CentOS Stream
+
+This section walks through creating a small Quay setup to use for experimentation or a small project. Perform a system update before continuing.
+
+### Deploying a Database
+
+Follow the steps below to create and configure a PostgreSQL database for Quay:
+
+1. Open a `psql` prompt using the `postgres` administrative account:
+
+ ```command
+ sudo -u postgres psql
+ ```
+
+ If prompted, provide the password you supplied when securing PostgreSQL:
+
+ ```output
+ postgres=#
+ ```
+
+1. Create a new example `quay_registry` database:
+
+ ```command
+ CREATE DATABASE quay_registry;
+ ```
+
+ ```output
+ CREATE DATABASE
+ ```
+
+1. Verify the database is present:
+
+ ```command
+ \l
+ ```
+
+ ```output
+ List of databases
+ Name | Owner | Encoding | Collate | Ctype | Access privileges
+ ---------------+----------+----------+-------------+-------------+-----------------------
+ postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
+ quay_registry | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
+ template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
+ | | | | | postgres=CTc/postgres
+ template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
+ | | | | | postgres=CTc/postgres
+ (4 rows)
+ ```
+
+1. Create a new example `quay_registry` user and provide it with a password:
+
+ ```command
+ CREATE USER quay_registry WITH encrypted password 'EXAMPLE_PASSWORD';
+ ```
+
+ ```output
+ CREATE ROLE
+ ```
+
+1. Ensure that the Quay user is present:
+
+ ```command
+ \du
+ ```
+
+ ```output
+ List of roles
+ Role name | Attributes | Member of
+ ---------------+------------------------------------------------------------+-----------
+ postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
+ quay_registry | | {}
+ ```
+
+1. Grant the Quay user rights to the Quay database:
+
+ ```command
+ GRANT ALL PRIVILEGES ON DATABASE quay_registry TO quay_registry;
+ ```
+
+ ```output
+ GRANT
+ ```
+
+1. Verify that the rights are in place:
+
+ ```command
+ \l quay_registry
+ ```
+
+ ```output
+ List of databases
+ Name | Owner | Encoding | Collate | Ctype | Access privileges
+ ---------------+----------+----------+-------------+-------------+----------------------------
+ quay_registry | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
+ | | | | | postgres=CTc/postgres +
+ | | | | | quay_registry=CTc/postgres
+ (1 row)
+ ```
+
+1. Enter the `quay_registry` database:
+
+ ```command
+ \c quay_registry
+ ```
+
+ ```output
+ You are now connected to database "quay_registry" as user "postgres".
+ ```
+
+1. Install the `pg_trgm` extension:
+
+ ```command
+ CREATE EXTENSION pg_trgm;
+ ```
+
+ ```output
+ CREATE EXTENSION
+ ```
+
+1. Verify that the `pg_trgm` extension is installed:
+
+ ```command
+ SELECT * FROM pg_extension WHERE extname = 'pg_trgm';
+ ```
+
+ ```output
+ oid | extname | extowner | extnamespace | extrelocatable | extversion | extconfig | extcondition
+ -------+---------+----------+--------------+----------------+------------+-----------+--------------
+ 16396 | pg_trgm | 10 | 2200 | t | 1.5 | |
+ (1 row)
+ ```
+
+1. Exit the `psql` shell:
+
+ ```command
+ \q
+ ```
+
+1. Open the `pg_hba.conf` file, normally located at `/var/lib/pgsql/data/`, in a text editor with administrative privileges:
+
+ ```command
+ sudo nano /var/lib/pgsql/data/pg_hba.conf
+ ```
+
+ Modify the `pg_hba.conf` file to allow remote connections by editing the `host` entry under the `# IPv4 local connections:` comment to match the following:
+
+ ```file {title="/var/lib/pgsql/data/pg_hba.conf" lang="aconf"}
+ # IPv4 local connections:
+ host all all 0.0.0.0/0 md5
+ ```
+
+ When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Now open the `postgresql.conf` file, normally located in `/var/lib/pgsql/data/`:
+
+ ```command
+ sudo nano /var/lib/pgsql/data/postgresql.conf
+ ```
+
+ Configure PostgreSQL to listen on all addresses by editing the `listen_addresses` line to match the following:
+
+ ```file {title="/var/lib/pgsql/data/postgresql.conf" lang="aconf"}
+ listen_addresses = '*'
+ ```
+
+ When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Restart PostgreSQL for these changes to take effect:
+
+ ```command
+ sudo systemctl restart postgresql
+ ```
+
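+To confirm that PostgreSQL now accepts the remote connections Quay needs, you can optionally check that it listens on port `5432` and log in as the `quay_registry` user. `YOUR_IP_ADDRESS` below is a placeholder for your instance's public IPv4 address:
+
+```command
+ss -tunelp | grep 5432
+psql -h YOUR_IP_ADDRESS -U quay_registry -d quay_registry -c '\conninfo'
+```
+
+Enter the password you set for the `quay_registry` user when prompted. Because the `host` entry above allows connections from any address (`0.0.0.0/0`), consider restricting it to the specific addresses that need access in a production deployment.
+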
+### Configuring Redis
+
+Perform the following additional Redis configuration tasks for Quay.
+
+1. Open the Redis configuration file in a text editor:
+
+ ```command
+ sudo nano /etc/redis/redis.conf
+ ```
+
+ Comment out the line that reads `bind 127.0.0.1 -::1` and add a new line underneath containing `bind 0.0.0.0`:
+
+ ```file { title="/etc/redis/redis.conf" lang="aconf" hl_lines="1,2"}
+ #bind 127.0.0.1 -::1
+ bind 0.0.0.0
+ ```
+
+ When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Restart Redis so the change takes effect:
+
+ ```command
+ sudo systemctl restart redis
+ ```
+
+1. Confirm that Redis is ready to use:
+
+ ```command
+ systemctl status redis
+ ```
+
+1. Verify that Redis is listening at port `6379`:
+
+ ```command
+ ss -tunelp | grep 6379
+ ```
+
+ ```output
+ tcp LISTEN 0 511 0.0.0.0:6379 0.0.0.0:* uid:992 ino:73370 sk:1001 cgroup:/system.slice/redis.service <->
+ ```
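+
+If the `redis-cli` client is installed alongside the server (it typically is), you can also confirm that Redis responds to commands; the `PING` command returns `PONG`:
+
+```command
+redis-cli ping
+```
+
+```output
+PONG
+```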
+
+### Generating the Quay Configuration
+
+It’s time to install a copy of Quay. This guide uses the free and open source Project Quay version discussed earlier.
+
+1. Obtain a copy of Quay:
+
+ ```command
+ docker pull quay.io/projectquay/quay:v3.9.0
+ ```
+
+ ```output
+ v3.9.0: Pulling from projectquay/quay
+ 57168402cb72: Pull complete
+ 3d50b44561f0: Pull complete
+ e42a14c55ca9: Pull complete
+ 2d3027ebf95a: Pull complete
+ 0422499b4b00: Pull complete
+ 27f2a5fad2e5: Pull complete
+ 60b93bda04c7: Pull complete
+ 15f0806a68f5: Pull complete
+ Digest: sha256:633818d2122a463e3aad8febbdc607a2e4df95db38b308fad8c071a60518f0a5
+ Status: Downloaded newer image for quay.io/projectquay/quay:v3.9.0
+ quay.io/projectquay/quay:v3.9.0
+ ```
+
+1. Verify the download by running `echo` inside the container:
+
+ ```command
+ docker run quay.io/projectquay/quay:v3.9.0 /bin/echo "Welcome to the Docker World!"
+ ```
+
+ ```output
+ "Welcome to the Docker World!"
+ __ __
+ / \ / \ ______ _ _ __ __ __
+ / /\ / /\ \ / __ \ | | | | / \ \ \ / /
+ / / / / \ \ | | | | | | | | / /\ \ \ /
+ \ \ \ \ / / | |__| | | |__| | / ____ \ | |
+ \ \/ \ \/ / \_ ___/ \____/ /_/ \_\ |_|
+ \__/ \__/ \ \__
+ \___\ by Red Hat
+ Build, Store, and Distribute your Containers
+ Running '/bin/echo'
+ Welcome to the Docker World!
+ ```
+
+1. Start the application to allow access to the configuration settings. Because the system doesn’t have a public key configured, this setup uses port `8080` and an `http` connection.
+
+ ```command
+ docker run -p 8080:8080 quay.io/projectquay/quay:v3.9.0 config EXAMPLE_PASSWORD
+ ```
+
+ ```output
+ __ __
+ / \ / \ ______ _ _ __ __ __
+ / /\ / /\ \ / __ \ | | | | / \ \ \ / /
+ / / / / \ \ | | | | | | | | / /\ \ \ /
+ \ \ \ \ / / | |__| | | |__| | / ____ \ | |
+ \ \/ \ \/ / \_ ___/ \____/ /_/ \_\ |_|
+ \__/ \__/ \ \__
+ \___\ by Red Hat
+ Build, Store, and Distribute your Containers
+
+ Startup timestamp:
+ Fri Nov 10 02:00:57 UTC 2023
+
+ Running all default config services
+ 2023-11-10 02:00:58,247 INFO RPC interface 'supervisor' initialized
+ 2023-11-10 02:00:58,247 CRIT Server 'unix_http_server' running without any HTTP authentication checking
+ 2023-11-10 02:00:58,247 INFO supervisord started with pid 8
+ 2023-11-10 02:00:59,250 INFO spawned: 'stdout' with pid 25
+ 2023-11-10 02:00:59,252 INFO spawned: 'config-editor' with pid 26
+ 2023-11-10 02:00:59,254 INFO spawned: 'quotaregistrysizeworker' with pid 27
+ 2023-11-10 02:00:59,257 INFO spawned: 'quotatotalworker' with pid 28
+ 2023-11-10 02:01:00,321 INFO success: stdout entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
+ 2023-11-10 02:01:00,322 INFO success: config-editor entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
+ 2023-11-10 02:01:00,322 INFO success: quotaregistrysizeworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
+ 2023-11-10 02:01:00,322 INFO success: quotatotalworker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
+ config-editor stdout | time="2023-11-10T02:00:59Z" level=warning msg="An error occurred loading TLS: No public key provided for HTTPS. Server falling back to HTTP."
+ config-editor stdout | time="2023-11-10T02:00:59Z" level=info msg="Running the configuration editor with HTTP on port 8080 with username quayconfig"
+ ```
+
+1. Open a web browser and use the following address format to access Quay: `http://YOUR_IP_ADDRESS:8080/`.
+
+1. When requested, supply `quayconfig` as the username, along with the configuration password you set in step three (`EXAMPLE_PASSWORD` in the example `docker run` command).
+
+1. You need to fill out seven fields across the **Server Configuration**, **Database**, and **Redis** sections:
+
+ - In the **Server Configuration** section, enter your Akamai Cloud Compute Instance's public IPv4 Address for the **Server Hostname** field:
+
+ 
+
+ - In the **Database** section, select **Postgres** from the **Database Type** dropdown menu:
+
+ 
+
+ Enter your Akamai Cloud Compute Instance's public IPv4 Address for the **Database Server** field. Enter `quay_registry` for both the **Username** and **Database Name** fields. For the **Password** field, enter the password you set when creating the `quay_registry` user in the *Deploying a Database* section.
+
+ - In the **Redis** section, enter your Akamai Cloud Compute Instance's public IPv4 Address for the **Redis Hostname** field:
+
+ 
+
+ {{< note >}}
+ See [Chapter 4. Configuring Red Hat Quay](https://access.redhat.com/documentation/en-us/red_hat_quay/3.3/html/deploy_red_hat_quay_-_basic/configuring_red_hat_quay) of the official documentation for further options to configure your Quay instance.
+ {{< /note >}}
+
+1. When done, click the **Validate Configuration Changes** button at the bottom of the screen. If successful, click the **Download** button to download the `quay-config.tar.gz` file.
+
+1. Return to the terminal and kill the running Quay server by pressing the CTRL+C key combination.
+
+1. Transfer the `quay-config.tar.gz` file to your user's home (`~/`) directory on your Akamai Cloud Compute Instance, for example using `scp` or an SFTP client.
+
+1. Create storage and configuration directories, then copy the `quay-config.tar.gz` file to the configuration directory:
+
+ ```command
+ sudo mkdir -p /data/quay/storage
+ sudo mkdir -p /data/quay/config
+ sudo cp quay-config.tar.gz /data/quay/config/
+ ```
+
+1. Change into the configuration directory and unarchive the required configuration data from the `quay-config.tar.gz` file:
+
+ ```command
+ cd /data/quay/config/
+ sudo tar xvf quay-config.tar.gz
+ ```
+
+1. Restart the Quay server with the new configuration:
+
+ ```command
+ docker run --restart=always -p 8080:8080 \
+ --sysctl net.core.somaxconn=4096 \
+ -v /data/quay/config:/conf/stack:Z \
+ -v /data/quay/storage:/datastorage:Z \
+ quay.io/projectquay/quay:v3.9.0
+ ```
+
+Starting Quay like this provides you with a continuously scrolling screen of updates so you can monitor registry activity.
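+
+If you prefer to run Quay in the background instead, a detached variant of the same command also works. This sketch adds the standard `-d` and `--name` Docker flags (the container name `quay` is just an example) so you can follow the registry logs separately:
+
+```command
+docker run -d --name=quay --restart=always -p 8080:8080 \
+    --sysctl net.core.somaxconn=4096 \
+    -v /data/quay/config:/conf/stack:Z \
+    -v /data/quay/storage:/datastorage:Z \
+    quay.io/projectquay/quay:v3.9.0
+docker logs -f quay
+```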
+
+## Deploying a Quay Registry
+
+1. With the server now configured to work with Quay, access the registry at `http://YOUR_IP_ADDRESS:8080/` in your web browser. During your first access, you see the Quay login screen:
+
+ 
+
+1. Click **Create Account** to create a new account. Once the account is created, a screen appears indicating that it has no repositories:
+
+ 
+
+1. Click **Creating a New Repository** to create your first repository. The next screen asks for a repository name, which must consist of lowercase letters and numbers (uppercase letters may cause Quay to reject the name).
+
+ 
+
+1. Choose between a public or private repository, then click the associated **Create** button.
+
+1. To fill the repository, issue either a `docker` or a `podman` pull command to obtain the application container image, tag it for your registry, and push it to the new repository (see the example after this list). After pushing the image, you can add tags to it.
+
+1. There is also a **Repository Settings** tab where you can configure the repository details. These settings let you add or remove users and adjust their repository permissions: read, write, and admin. You can also configure events and notifications based on repository activity to keep everyone on the project informed about changes.
+
+ 
+
+1. To create additional repositories, click the **+** icon, then choose **New Repository** from the drop-down list. [Chapter 1. Creating a repository](https://access.redhat.com/documentation/en-us/red_hat_quay/3.2/html/use_red_hat_quay/creating_a_repository) provides additional details on working with repositories using Quay.
+
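+The following sketch expands on filling a repository (referenced in the list above). Because this setup serves the registry over plain HTTP, Docker must first be told to treat it as an insecure registry. `YOUR_IP_ADDRESS`, the account name `quay_user`, and the repository name `myapp` are placeholders for the values you created; if `/etc/docker/daemon.json` already contains settings, merge the key into the existing file instead of overwriting it:
+
+```command
+# Allow Docker to push to the HTTP-only registry, then restart the daemon.
+# Note: restarting Docker briefly stops the Quay container; --restart=always brings it back.
+echo '{ "insecure-registries": ["YOUR_IP_ADDRESS:8080"] }' | sudo tee /etc/docker/daemon.json
+sudo systemctl restart docker
+
+# Log in with your Quay account, then tag and push an example image.
+docker login YOUR_IP_ADDRESS:8080
+docker pull alpine:latest
+docker tag alpine:latest YOUR_IP_ADDRESS:8080/quay_user/myapp:latest
+docker push YOUR_IP_ADDRESS:8080/quay_user/myapp:latest
+```
+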
+## Conclusion
+
+Quay enhances container development in three ways: improved security, greater flexibility, and increased developer productivity. To obtain these benefits, you need to install and configure several supporting software products so that Quay has the resources it requires, which adds complexity to the development environment. Consequently, there is a tradeoff between the benefits of Quay and the additional configuration work involved. If you want to develop something beyond a simple project, also weigh the costs of using Red Hat Quay instead of Project Quay.
\ No newline at end of file
diff --git a/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-create-repositories-screen.png b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-create-repositories-screen.png
new file mode 100644
index 00000000000..047698ad648
Binary files /dev/null and b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-create-repositories-screen.png differ
diff --git a/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-login-screen.png b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-login-screen.png
new file mode 100644
index 00000000000..4e82d60dd27
Binary files /dev/null and b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-login-screen.png differ
diff --git a/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-new-user-empty-repositories-screen.png b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-new-user-empty-repositories-screen.png
new file mode 100644
index 00000000000..6bf856b3864
Binary files /dev/null and b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-new-user-empty-repositories-screen.png differ
diff --git a/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-repository-settings-screen.png b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-repository-settings-screen.png
new file mode 100644
index 00000000000..da2de31d372
Binary files /dev/null and b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-repository-settings-screen.png differ
diff --git a/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-setup-database.png b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-setup-database.png
new file mode 100644
index 00000000000..4345b873db4
Binary files /dev/null and b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-setup-database.png differ
diff --git a/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-setup-redis.png b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-setup-redis.png
new file mode 100644
index 00000000000..5ff520636e0
Binary files /dev/null and b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-setup-redis.png differ
diff --git a/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-setup-server-configuration.png b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-setup-server-configuration.png
new file mode 100644
index 00000000000..341c9010608
Binary files /dev/null and b/docs/guides/applications/containers/build-a-cloud-native-private-registry-with-quay/project-quay-setup-server-configuration.png differ
diff --git a/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-centos-5/index.md b/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-centos-5/index.md
index 3d6e1166f0d..b110362b33a 100644
--- a/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-centos-5/index.md
+++ b/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-centos-5/index.md
@@ -1,7 +1,7 @@
---
slug: power-team-collaboration-with-egroupware-on-centos-5
-title: Power Team Collaboration with eGroupware on CentOS 5
-description: 'This guide shows how you can build a collaborative groupware system to share information in your organization with the eGroupware software on CentOS 5.'
+title: Power Team Collaboration with EGroupware on CentOS 5
+description: 'This guide shows how you can build a collaborative groupware system to share information in your organization with the EGroupware software on CentOS 5.'
authors: ["Linode"]
contributors: ["Linode"]
published: 2010-02-03
@@ -11,9 +11,9 @@ tags: ["centos", "email", "lamp"]
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
aliases: ['/applications/project-management/power-team-collaboration-with-egroupware-on-centos-5/','/web-applications/project-management/egroupware/centos-5/']
external_resources:
- - '[eGroupware Home Page](http://www.egroupware.org/)'
- - '[eGroupware Documentation](http://www.egroupware.org/wiki/)'
- - '[eGroupware Applications](http://www.egroupware.org/applications)'
+ - '[EGroupware Home Page](http://www.egroupware.org/)'
+ - '[EGroupware Documentation](http://www.egroupware.org/wiki/)'
+ - '[EGroupware Applications](http://www.egroupware.org/applications)'
relations:
platform:
key: collaborate-with-egroupware
@@ -22,13 +22,13 @@ relations:
deprecated: true
---
-The eGroupware suite provides a group of server-based applications that offer collaboration and enterprise-targeted tools to help enable communication and information sharing between teams and institutions. These tools are tightly coupled and allow users to take advantage of data from one system, like the address book, and make use of it in other systems, including the calendar, CRM, and email systems. eGroupware is designed to be flexible and adaptable, and is capable of scaling to meet the demands of a diverse class of enterprise needs and work groups, all without the need to rely on a third-party vendor. As eGroupware provides its applications entirely independent of any third party service, the suite is a good option for organizations who need web-based groupware solutions, but do not want to rely on a third party provider for these services.
+The EGroupware suite provides a group of server-based applications that offer collaboration and enterprise-targeted tools to help enable communication and information sharing between teams and institutions. These tools are tightly coupled and allow users to take advantage of data from one system, like the address book, and make use of it in other systems, including the calendar, CRM, and email systems. EGroupware is designed to be flexible and adaptable, and is capable of scaling to meet the demands of a diverse class of enterprise needs and work groups, all without the need to rely on a third-party vendor. As EGroupware provides its applications entirely independent of any third party service, the suite is a good option for organizations who need web-based groupware solutions, but do not want to rely on a third party provider for these services.
-Before installing eGroupware, we assume that you have followed our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/). If you're new to Linux server administration, you may be interested in our [introduction to Linux concepts guide](/docs/guides/introduction-to-linux-concepts/), [beginner's guide](/docs/products/compute/compute-instances/faqs/) and [administration basics guide](/docs/guides/linux-system-administration-basics/).Additionally, you will need install a [LAMP stack](/docs/guides/lamp-server-on-centos-5/) as a prerequisite for installing eGroupware.
+Before installing EGroupware, we assume that you have followed our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/). If you're new to Linux server administration, you may be interested in our [introduction to Linux concepts guide](/docs/guides/introduction-to-linux-concepts/), [beginner's guide](/docs/products/compute/compute-instances/faqs/) and [administration basics guide](/docs/guides/linux-system-administration-basics/).Additionally, you will need install a [LAMP stack](/docs/guides/lamp-server-on-centos-5/) as a prerequisite for installing EGroupware.
-## Install eGroupware
+## Install EGroupware
-In this guide, we will be installing eGroupware from the packages provided by the eGroupware project and built by the openSUSE build service for CentOS 5. We've chosen to install using this method in an effort to ensure greater stability, easy upgrade paths, and a more straight forward installation process. Begin the installation by issuing the following commands to initialize the eGroupware repositories:
+In this guide, we will be installing EGroupware from the packages provided by the EGroupware project and built by the openSUSE build service for CentOS 5. We've chosen to install using this method in an effort to ensure greater stability, easy upgrade paths, and a more straight forward installation process. Begin the installation by issuing the following commands to initialize the EGroupware repositories:
yum update
yum install wget
@@ -36,15 +36,15 @@ In this guide, we will be installing eGroupware from the packages provided by th
wget http://download.opensuse.org/repositories/server:/eGroupWare/CentOS_5/server:eGroupWare.repo
yum update
-Now you can issue the following command to install eGroupware and other required packages:
+Now you can issue the following command to install EGroupware and other required packages:
- yum install eGroupware mysql-server
+ yum install eGroupware mysql-server
-Congratulations, you've now installed eGroupware!
+Congratulations, you've now installed EGroupware!
-## Configure Access to eGroupware
+## Configure Access to EGroupware
-The configuration options for eGroupware are located in the file `/etc/httpd/conf.d/egroupware`. Add the following line to your virtual hosting configuration:
+The configuration options for EGroupware are located in the file `/etc/httpd/conf.d/egroupware`. Add the following line to your virtual hosting configuration:
{{< file "Apache Virtual Hosting Configuration" apache >}}
Alias /egroupware /usr/share/egroupware
@@ -52,18 +52,18 @@ Alias /egroupware /usr/share/egroupware
{{< /file >}}
-When inserted into the virtual hosting configuration for `example.com`, accessing the URL `http://example.com/egroupware/` will allow you to access your eGroupware site. If you do not have virtual hosting configured, eGroupware will be accessible at `/egroupware` of the default Apache host.
+When inserted into the virtual hosting configuration for `example.com`, accessing the URL `http://example.com/egroupware/` will allow you to access your EGroupware site. If you do not have virtual hosting configured, EGroupware will be accessible at `/egroupware` of the default Apache host.
-Before continuing with the installation of eGroupware, issue the following commands to start the webserver and database server for the first time. Furthermore the `chkconfig` commands will ensure that these services are initiated following reboots:
+Before continuing with the installation of EGroupware, issue the following commands to start the webserver and database server for the first time. Furthermore the `chkconfig` commands will ensure that these services are initiated following reboots:
/etc/init.d/httpd start
/etc/init.d/mysqld start
chkconfig mysqld on
chkconfig httpd on
-## Configure eGroupware
+## Configure EGroupware
-Before we begin the configuration of eGroupware, we need to ensure that a number of directories exist for use by eGroupware. Issue the following sequence of commands:
+Before we begin the configuration of EGroupware, we need to ensure that a number of directories exist for use by EGroupware. Issue the following sequence of commands:
mkdir -p /srv/www/example.com/backup
mkdir -p /srv/www/example.com/tmp
@@ -72,6 +72,6 @@ Before we begin the configuration of eGroupware, we need to ensure that a number
chown apache:apache -R /srv/www/example.com/tmp
chown apache:apache -R /srv/www/example.com/files
-Visit `http://example.com/egroupware/setup/` in your web browser to begin the setup process presented by the eGroupware application. When you have completed the initial "Header Setup" process, select the option to write the "header" file and then continue to the "Setup/Admin." Ensure that you've selected the correct "Domain" if you configured more than one. At this juncture, you must install the eGroupware applications that you will expect to use. Select the proper character set and select the button to "'Install' all applications." You can now "Recheck" your installation. In the "Configuration" setup page, supply eGroupware with paths to the `backup/` `tmp/` and `files/` directory created above. Additionally, you will need to create an admin account for your eGroupware domain, which you can accomplish from this Setup Domain page.
+Visit `http://example.com/egroupware/setup/` in your web browser to begin the setup process presented by the EGroupware application. When you have completed the initial "Header Setup" process, select the option to write the "header" file and then continue to the "Setup/Admin." Ensure that you've selected the correct "Domain" if you configured more than one. At this juncture, you must install the EGroupware applications that you will expect to use. Select the proper character set and select the button to "'Install' all applications." You can now "Recheck" your installation. In the "Configuration" setup page, supply EGroupware with paths to the `backup/` `tmp/` and `files/` directory created above. Additionally, you will need to create an admin account for your EGroupware domain, which you can accomplish from this Setup Domain page.
-When all applications have been installed, you will be provided with a number of options that you can use to fine-tune the operations and behavior of your eGroupware instance. If you wish to use eGroupware to help manage email, you will need to have a running email system.
+When all applications have been installed, you will be provided with a number of options that you can use to fine-tune the operations and behavior of your EGroupware instance. If you wish to use EGroupware to help manage email, you will need to have a running email system.
diff --git a/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-debian-5-lenny/index.md b/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-debian-5-lenny/index.md
index 7fc1742973c..904300f7457 100644
--- a/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-debian-5-lenny/index.md
+++ b/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-debian-5-lenny/index.md
@@ -1,7 +1,7 @@
---
slug: power-team-collaboration-with-egroupware-on-debian-5-lenny
-title: 'Power Team Collaboration with eGroupware on Debian 5 (Lenny)'
-description: 'This guide shows how you can build a collaborative groupware system to share information in your organization with the eGroupware software on Debian 5 "Lenny".'
+title: 'Power Team Collaboration with EGroupware on Debian 5 (Lenny)'
+description: 'This guide shows how you can build a collaborative groupware system to share information in your organization with the EGroupware software on Debian 5 "Lenny".'
authors: ["Linode"]
contributors: ["Linode"]
published: 2010-01-26
@@ -18,24 +18,24 @@ relations:
deprecated: true
---
-The eGroupware suite provides a group of server-based applications that offer collaboration and enterprise-targeted tools to help enable communication and information sharing between teams and institutions. These tools are tightly coupled and allow users to take advantage of data from one system, like the address book, and make use of it in other systems, including the calendar, CRM, and email systems. eGroupware is designed to be flexible and adaptable, and is capable of scaling to meet the demands of a diverse class of enterprise needs and work groups, all without the need to rely on a third-party vendor. As eGroupware provides its applications entirely independent of any third party service, the suite is a good option for organizations who need web-based groupware solutions, but do not want to rely on a third party provider for these services.
+The EGroupware suite provides a group of server-based applications that offer collaboration and enterprise-targeted tools to help enable communication and information sharing between teams and institutions. These tools are tightly coupled and allow users to take advantage of data from one system, like the address book, and make use of it in other systems, including the calendar, CRM, and email systems. EGroupware is designed to be flexible and adaptable, and is capable of scaling to meet the demands of a diverse class of enterprise needs and work groups, all without the need to rely on a third-party vendor. As EGroupware provides its applications entirely independent of any third party service, the suite is a good option for organizations who need web-based groupware solutions, but do not want to rely on a third party provider for these services.
-Before installing eGroupware we assume that you have followed our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/). If you're new to Linux server administration, you may be interested in our [introduction to Linux concepts guide](/docs/guides/introduction-to-linux-concepts/), [beginner's guide](/docs/products/compute/compute-instances/faqs/) and [administration basics guide](/docs/guides/linux-system-administration-basics/). Additionally, you will need install a [LAMP stack](/docs/guides/lamp-server-on-debian-5-lenny/) as a prerequisite for installing eGroupware. You may also want to use eGroupware to help manage email, and will need to have a running email system. Consider running [Postfix with Courier and MySQL](/docs/guides/email-with-postfix-courier-and-mysql-on-debian-5-lenny/).
+Before installing EGroupware we assume that you have followed our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/). If you're new to Linux server administration, you may be interested in our [introduction to Linux concepts guide](/docs/guides/introduction-to-linux-concepts/), [beginner's guide](/docs/products/compute/compute-instances/faqs/) and [administration basics guide](/docs/guides/linux-system-administration-basics/). Additionally, you will need install a [LAMP stack](/docs/guides/lamp-server-on-debian-5-lenny/) as a prerequisite for installing EGroupware. You may also want to use EGroupware to help manage email, and will need to have a running email system. Consider running [Postfix with Courier and MySQL](/docs/guides/email-with-postfix-courier-and-mysql-on-debian-5-lenny/).
-## Install eGroupware
+## Install EGroupware
Make sure your package repositories and installed programs are up to date by issuing the following commands:
apt-get update
apt-get upgrade --show-upgraded
-In this guide we will be installing eGroupware from the packages provided by the Debian project. Although there are some slightly more contemporary versions of eGroupware available from upstream sources, we've chosen to install using this method in an effort to ensure greater stability, easy upgrade paths, and a more straight forward installation process. Begin the installation by issuing the following command:
+In this guide we will be installing EGroupware from the packages provided by the Debian project. Although there are some slightly more contemporary versions of EGroupware available from upstream sources, we've chosen to install using this method in an effort to ensure greater stability, easy upgrade paths, and a more straight forward installation process. Begin the installation by issuing the following command:
apt-get install egroupware
During the installation process, an interactive "package configuration" is provided by Debian's `debconf` system. This provides a URL to access the setup utility after you've finished the installation process. Make note of the URL, the form is `http://example/egroupware/setup/` where `example` is the hostname of the system. Continue with the installation process.
-The `debconf` process creates an administrator account, which it allows you to specify at this time. By default this eGroupware username is "admin." Change the username, if you like, and create a password as instructed. When this process is complete, the installation process is finished. You will also want to issue the following commands to install additional dependencies and resolve several minor issues with the distribution package in order:
+The `debconf` process creates an administrator account, which it allows you to specify at this time. By default this EGroupware username is "admin." Change the username, if you like, and create a password as instructed. When this process is complete, the installation process is finished. You will also want to issue the following commands to install additional dependencies and resolve several minor issues with the distribution package in order:
pear install Auth_SASL
rm /usr/share/egroupware/etemplate/doc
@@ -43,29 +43,29 @@ The `debconf` process creates an administrator account, which it allows you to s
rm /usr/share/egroupware/sitemgr/doc
cp -R /usr/share/doc/egroupware-sitemgr/ /usr/share/egroupware/sitemgr/doc
-Congratulations, you've now installed eGroupware!
+Congratulations, you've now installed EGroupware!
-## Configure Access to eGroupware
+## Configure Access to EGroupware
-If you do not have any virtual hosts enabled and your domain is `example.com`, you should be able to visit `http://example.com/egroupware/setup` to access the remainder of the eGroupware setup provided that `example.com` points to the IP of your server. However, if you have virtual hosting setup, you will need to issue the following command to create a symbolic link to eGroupware:
+If you do not have any virtual hosts enabled and your domain is `example.com`, you should be able to visit `http://example.com/egroupware/setup` to access the remainder of the EGroupware setup provided that `example.com` points to the IP of your server. However, if you have virtual hosting setup, you will need to issue the following command to create a symbolic link to EGroupware:
ln -s /usr/share/egroupware/ /srv/www/example.com/public_html/egroupware
-Replace `/srv/example.com/public_html/` with the path to your virtual host's `DocumentRoot`, or other location within the `DocumentRoot` where you want eGroupware to be located. Then, visit `http://example.com/egroupware/setup/` to complete the setup process. You will be prompted for a password then brought to a configuration interface. Review the settings and modify them to reflect the specifics of your deployment, particularly the database settings. Do not forget to create an administrative user for the database instance you are creating. When you have completed the eGroupware configuration, select the "'Write' Configuration file" option. Continue to the "Login" page.
+Replace `/srv/example.com/public_html/` with the path to your virtual host's `DocumentRoot`, or other location within the `DocumentRoot` where you want EGroupware to be located. Then, visit `http://example.com/egroupware/setup/` to complete the setup process. You will be prompted for a password then brought to a configuration interface. Review the settings and modify them to reflect the specifics of your deployment, particularly the database settings. Do not forget to create an administrative user for the database instance you are creating. When you have completed the EGroupware configuration, select the "'Write' Configuration file" option. Continue to the "Login" page.
-## Configure eGroupware
+## Configure EGroupware
-When you have completed the initial "Header Setup," select the option to write the "header" file and then continue to the "Setup/Admin." Ensure that you've selected the correct "Domain" if you configured more than one. At this juncture you must install the eGroupware applications that you will expect to use. Select the proper character set and select the button to "'Install' all applications." You can now "Recheck" your installation. Supply eGroupware with the configuration for your email server. Additionally, you will need to create an admin account for your eGroupware domain, which you can accomplish from this page.
+When you have completed the initial "Header Setup," select the option to write the "header" file and then continue to the "Setup/Admin." Ensure that you've selected the correct "Domain" if you configured more than one. At this juncture you must install the EGroupware applications that you will expect to use. Select the proper character set and select the button to "'Install' all applications." You can now "Recheck" your installation. Supply EGroupware with the configuration for your email server. Additionally, you will need to create an admin account for your EGroupware domain, which you can accomplish from this page.
-When all applications have been installed, you will be provided with a number of options that you can use to fine-tune the operations and behavior of your eGroupware instance. You've now successfully installed and configured eGroupware!
+When all applications have been installed, you will be provided with a number of options that you can use to fine-tune the operations and behavior of your EGroupware instance. You've now successfully installed and configured EGroupware!
## More Information
You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.
-- [eGroupware Home Page](http://www.egroupware.org/)
-- [eGroupware Documentation](http://www.egroupware.org/wiki/)
-- [eGroupware Applications](http://www.egroupware.org/applications)
+- [EGroupware Home Page](http://www.egroupware.org/)
+- [EGroupware Documentation](http://www.egroupware.org/wiki/)
+- [EGroupware Applications](http://www.egroupware.org/applications)
diff --git a/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-fedora-13/index.md b/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-fedora-13/index.md
index 78bf39a5553..26b985ab588 100644
--- a/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-fedora-13/index.md
+++ b/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-fedora-13/index.md
@@ -1,7 +1,7 @@
---
slug: power-team-collaboration-with-egroupware-on-fedora-13
-title: Power Team Collaboration with eGroupware on Fedora 13
-description: 'This guide shows how you can build a collaborative groupware system to share information in your organization with the eGroupware software on Fedora 13.'
+title: Power Team Collaboration with EGroupware on Fedora 13
+description: 'This guide shows how you can build a collaborative groupware system to share information in your organization with the EGroupware software on Fedora 13.'
authors: ["Linode"]
contributors: ["Linode"]
published: 2010-09-16
@@ -18,13 +18,13 @@ relations:
deprecated: true
---
-The eGroupware suite provides a group of server-based applications that offer collaboration and enterprise-targeted tools to help enable communication and information sharing between teams and institutions. These tools are tightly coupled and allow users to take advantage of data from one system, like the address book, and make use of it in other systems including the calendar, CRM, and email systems. eGroupware is designed to be flexible and adaptable, and is capable of scaling to meet the demands of a diverse class of enterprise needs and work groups without the need to rely on a third-party vendor.
+The EGroupware suite provides a group of server-based applications that offer collaboration and enterprise-targeted tools to help enable communication and information sharing between teams and institutions. These tools are tightly coupled and allow users to take advantage of data from one system, like the address book, and make use of it in other systems including the calendar, CRM, and email systems. EGroupware is designed to be flexible and adaptable, and is capable of scaling to meet the demands of a diverse class of enterprise needs and work groups without the need to rely on a third-party vendor.
-Before installing eGroupware, it is assumed that you have followed our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/). If you're new to Linux server administration, you may be interested in our [introduction to Linux concepts guide](/docs/guides/introduction-to-linux-concepts/), [beginner's guide](/docs/products/compute/compute-instances/faqs/) and [administration basics guide](/docs/guides/linux-system-administration-basics/). Additionally, you will need install a [LAMP stack](/docs/guides/lamp-server-on-fedora-13/) as a prerequisite for installing eGroupware.
+Before installing EGroupware, it is assumed that you have followed our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/). If you're new to Linux server administration, you may be interested in our [introduction to Linux concepts guide](/docs/guides/introduction-to-linux-concepts/), [beginner's guide](/docs/products/compute/compute-instances/faqs/) and [administration basics guide](/docs/guides/linux-system-administration-basics/). Additionally, you will need install a [LAMP stack](/docs/guides/lamp-server-on-fedora-13/) as a prerequisite for installing EGroupware.
-## Install eGroupware
+## Install EGroupware
-In this guide, you will be installing eGroupware from the packages provided by the eGroupware project and built by the openSUSE build service for Fedora 13. Begin the installation by issuing the following commands to initialize the eGroupware repositories:
+In this guide, you will be installing EGroupware from the packages provided by the EGroupware project and built by the openSUSE build service for Fedora 13. Begin the installation by issuing the following commands to initialize the EGroupware repositories:
yum update
yum install wget
@@ -32,15 +32,15 @@ In this guide, you will be installing eGroupware from the packages provided by t
wget http://download.opensuse.org/repositories/server:/eGroupWare/Fedora_13/server:eGroupWare.repo
yum update
-Now you can issue the following command to install the eGroupware package:
+Now you can issue the following command to install the EGroupware package:
- yum install eGroupware
+ yum install eGroupware
-Congratulations, you've now installed eGroupware!
+Congratulations, you've now installed EGroupware!
-## Configure Access to eGroupware
+## Configure Access to EGroupware
-The configuration options for eGroupware are located in the file `/etc/httpd/conf.d/egroupware`. Add the following line to your virtual hosting configuration:
+The configuration options for EGroupware are located in the file `/etc/httpd/conf.d/egroupware`. Add the following line to your virtual hosting configuration:
{{< file "Apache Virtual Hosting Configuration" apache >}}
Alias /egroupware /usr/share/egroupware
@@ -48,11 +48,11 @@ Alias /egroupware /usr/share/egroupware
{{< /file >}}
-When inserted into the virtual hosting configuration for `example.com`, accessing the URL `http://example.com/egroupware/` will allow you to access your eGroupware site. If you do not have virtual hosting configured, eGroupware will be accessible at `/egroupware` of the default Apache host.
+When inserted into the virtual hosting configuration for `example.com`, accessing the URL `http://example.com/egroupware/` will allow you to access your EGroupware site. If you do not have virtual hosting configured, EGroupware will be accessible at `/egroupware` of the default Apache host.
-## Configure eGroupware
+## Configure EGroupware
-Before you begin the configuration of eGroupware, you need to ensure that a number of directories exist for use by eGroupware. Issue the following sequence of commands:
+Before you begin the configuration of EGroupware, you need to ensure that a number of directories exist for use by EGroupware. Issue the following sequence of commands:
mkdir -p /srv/www/example.com/backup
mkdir -p /srv/www/example.com/tmp
@@ -61,17 +61,17 @@ Before you begin the configuration of eGroupware, you need to ensure that a numb
chown apache:apache -R /srv/www/example.com/tmp
chown apache:apache -R /srv/www/example.com/files
-Visit `http://example.com/egroupware/setup/` in your web browser to begin the setup process presented by the eGroupware application. When you have completed the initial "Header Setup" process, select the option to write the "header" file and then continue to the "Setup/Admin." Ensure that you've selected the correct "Domain" if you configured more than one. At this juncture, you must install the eGroupware applications that you will expect to use. Select the proper character set and select the button to "'Install' all applications." You can now "Recheck" your installation. In the "Configuration" setup page, supply eGroupware with paths to the `backup/` `tmp/` and `files/` directory created above. Additionally, you will need to create an admin account for your eGroupware domain, which you can accomplish from this Setup Domain page.
+Visit `http://example.com/egroupware/setup/` in your web browser to begin the setup process presented by the EGroupware application. When you have completed the initial "Header Setup" process, select the option to write the "header" file and then continue to the "Setup/Admin." Ensure that you've selected the correct "Domain" if you configured more than one. At this juncture, you must install the EGroupware applications that you will expect to use. Select the proper character set and select the button to "'Install' all applications." You can now "Recheck" your installation. In the "Configuration" setup page, supply EGroupware with paths to the `backup/` `tmp/` and `files/` directory created above. Additionally, you will need to create an admin account for your EGroupware domain, which you can accomplish from this Setup Domain page.
-When all applications have been installed, you will be provided with a number of options that you can use to fine-tune the operations and behavior of your eGroupware instance. If you wish to use eGroupware to help manage email, you will need to have a running email system.
+When all applications have been installed, you will be provided with a number of options that you can use to fine-tune the operations and behavior of your EGroupware instance. If you wish to use EGroupware to help manage email, you will need to have a running email system.
## More Information
You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.
-- [eGroupware Home Page](http://www.egroupware.org/)
-- [eGroupware Documentation](http://www.egroupware.org/wiki/)
-- [eGroupware Applications](http://www.egroupware.org/applications)
+- [EGroupware Home Page](http://www.egroupware.org/)
+- [EGroupware Documentation](http://www.egroupware.org/wiki/)
+- [EGroupware Applications](http://www.egroupware.org/applications)
diff --git a/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-ubuntu-9-10-karmic/index.md b/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-ubuntu-9-10-karmic/index.md
index 0ac8f50c1c0..f52128f9b4e 100644
--- a/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-ubuntu-9-10-karmic/index.md
+++ b/docs/guides/applications/project-management/power-team-collaboration-with-egroupware-on-ubuntu-9-10-karmic/index.md
@@ -1,7 +1,7 @@
---
slug: power-team-collaboration-with-egroupware-on-ubuntu-9-10-karmic
-title: 'Power Team Collaboration with eGroupware on Ubuntu 9.10 (Karmic)'
-description: 'This guide shows how to install and build a groupware system using eGroupware, which provides a group of server-side apps for collaboration on Ubuntu 9.10 "Karmic".'
+title: 'Power Team Collaboration with EGroupware on Ubuntu 9.10 (Karmic)'
+description: 'This guide shows how to install and build a groupware system using EGroupware, which provides a group of server-side apps for collaboration on Ubuntu 9.10 "Karmic".'
authors: ["Linode"]
contributors: ["Linode"]
published: 2010-02-01
@@ -18,18 +18,18 @@ relations:
deprecated: true
---
-The eGroupware suite provides a group of server-based applications that offer collaboration and enterprise-targeted tools to help enable communication and information sharing between teams and institutions. These tools are tightly coupled and allow users to take advantage of data from one system, like the address book, and make use of it in other systems, including the calendar, CRM, and email systems. eGroupware is designed to be flexible and adaptable, and is capable of scaling to meet the demands of a diverse class of enterprise needs and work groups, all without the need to rely on a third-party vendor. As eGroupware provides its applications entirely independent of any third party service, the suite is a good option for organizations who need web-based groupware solutions, but do not want to rely on a third party provider for these services.
+The EGroupware suite provides a group of server-based applications that offer collaboration and enterprise-targeted tools to help enable communication and information sharing between teams and institutions. These tools are tightly coupled and allow users to take advantage of data from one system, like the address book, and make use of it in other systems, including the calendar, CRM, and email systems. EGroupware is designed to be flexible and adaptable, and is capable of scaling to meet the demands of a diverse class of enterprise needs and work groups, all without the need to rely on a third-party vendor. As EGroupware provides its applications entirely independent of any third party service, the suite is a good option for organizations who need web-based groupware solutions, but do not want to rely on a third party provider for these services.
-Before installing eGroupware, we assume that you have followed our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/). If you're new to Linux server administration, you may be interested in our [introduction to Linux concepts guide](/docs/guides/introduction-to-linux-concepts/), [beginner's guide](/docs/products/compute/compute-instances/faqs/) and [administration basics guide](/docs/guides/linux-system-administration-basics/). Additionally, you will need install a [LAMP stack](/docs/guides/lamp-server-on-ubuntu-9-10-karmic/) as a prerequisite for installing eGroupware.
+Before installing EGroupware, we assume that you have followed our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/). If you're new to Linux server administration, you may be interested in our [introduction to Linux concepts guide](/docs/guides/introduction-to-linux-concepts/), [beginner's guide](/docs/products/compute/compute-instances/faqs/) and [administration basics guide](/docs/guides/linux-system-administration-basics/). Additionally, you will need install a [LAMP stack](/docs/guides/lamp-server-on-ubuntu-9-10-karmic/) as a prerequisite for installing EGroupware.
-## Install eGroupware
+## Install EGroupware
Make sure your package repositories and installed programs are up to date by issuing the following commands:
apt-get update
apt-get upgrade --show-upgraded
-In this guide we will be installing eGroupware from the packages provided by the Ubuntu community. Although there are some slightly more contemporary versions of eGroupware available from upstream sources, we've chosen to install using this method in an effort to ensure greater stability, easy upgrade paths, and a more straight forward installation process. Before we begin the installation, we must enable the "universe" repositories for Ubuntu 9.10. Uncomment the following lines from `/etc/apt/sources.list` to make these repositories accessible:
+In this guide we will be installing EGroupware from the packages provided by the Ubuntu community. Although there are some slightly more contemporary versions of EGroupware available from upstream sources, we've chosen to install using this method in an effort to ensure greater stability, easy upgrade paths, and a more straight forward installation process. Before we begin the installation, we must enable the "universe" repositories for Ubuntu 9.10. Uncomment the following lines from `/etc/apt/sources.list` to make these repositories accessible:
{{< file "/etc/apt/sources.list" >}}
deb http://us.archive.ubuntu.com/ubuntu/ karmic universe
@@ -53,7 +53,7 @@ Finally, begin the installation by issuing the following command:
During the installation process, an interactive "package configuration" is provided by Ubuntu's `debconf` system. This provides a URL to access the setup utility after you've finished the installation process. Make note of the URL; the form is `http://example/egroupware/setup/` where `example` is the hostname of the system. Continue with the installation process.
-The `debconf` process creates an administrator account for the "header system", which it allows you to specify at this time. By default this eGroupware username is "admin". Change the username if you would like and create a password as instructed. When this process is complete, the installation process is finished. You will also want to issue the following commands to install additional dependencies and resolve several minor issues with the distribution package in order:
+The `debconf` process creates an administrator account for the "header system", which it allows you to specify at this time. By default this EGroupware username is "admin". Change the username if you would like and create a password as instructed. When this process is complete, the installation process is finished. You will also want to issue the following commands to install additional dependencies and resolve several minor issues with the distribution package in order:
pear install Auth_SASL
rm /usr/share/egroupware/etemplate/doc
@@ -71,29 +71,29 @@ mbstring.func_overload = 7
{{< /file >}}
-Congratulations, you've now installed eGroupware!
+Congratulations, you've now installed EGroupware!
-## Configure Access to eGroupware
+## Configure Access to EGroupware
-If you do not have any virtual hosts enabled and your domain is `example.com`, you should be able to visit `http://example.com/egroupware/setup` to access the remainder of the eGroupware setup provided that `example.com` points to the IP of your Linode. However, if you have virtual hosting setup, you will need to issue the following command to create a symbolic link to eGroupware:
+If you do not have any virtual hosts enabled and your domain is `example.com`, you should be able to visit `http://example.com/egroupware/setup` to access the remainder of the EGroupware setup, provided that `example.com` points to the IP of your Linode. However, if you have virtual hosting set up, you will need to issue the following command to create a symbolic link to EGroupware:
ln -s /usr/share/egroupware/ /srv/www/example.com/public_html/egroupware
-Replace `/srv/example.com/public_html/` with the path to your virtual host's `DocumentRoot`, or other location within the `DocumentRoot` where you want eGroupware to be located. Then, visit `http://example.com/egroupware/setup/` to complete the setup process. You will be prompted for a password then brought to a configuration interface. Review the settings and modify them to reflect the specifics of your deployment, and create an "eGW Database Instance" for your deployment. Do not forget to create an administrative user for the database instance you are creating. When you have completed the eGroupware configuration, select the "'Write' Configuration file" option. Continue to the "Login" page.
+Replace `/srv/www/example.com/public_html/` with the path to your virtual host's `DocumentRoot`, or another location within the `DocumentRoot` where you want EGroupware to be located. Then, visit `http://example.com/egroupware/setup/` to complete the setup process. You will be prompted for a password, then brought to a configuration interface. Review the settings and modify them to reflect the specifics of your deployment, and create an "eGW Database Instance" for your deployment. Do not forget to create an administrative user for the database instance you are creating. When you have completed the EGroupware configuration, select the "'Write' Configuration file" option. Continue to the "Login" page.
-## Configure eGroupware
+## Configure EGroupware
-When you have completed the initial "Header Setup," select the option to write the "header" file and then continue to the "Setup/Admin." Ensure that you've selected the correct "Domain" if you configured more than one. At this juncture you must install the eGroupware applications that you will expect to use. Select the proper character set and then select the button to "'Install' all applications." You can now "Recheck" your installation. Supply eGroupware with the configuration for your email server. Additionally, you will need to create an admin account for your eGroupware domain, which you can accomplish from this page.
+When you have completed the initial "Header Setup," select the option to write the "header" file and then continue to the "Setup/Admin." Ensure that you've selected the correct "Domain" if you configured more than one. At this juncture you must install the EGroupware applications that you will expect to use. Select the proper character set and then select the button to "'Install' all applications." You can now "Recheck" your installation. Supply EGroupware with the configuration for your email server. Additionally, you will need to create an admin account for your EGroupware domain, which you can accomplish from this page.
-When all applications have been installed, you will be provided with a number of options that you can use to fine-tune the operations and behavior of your eGroupware instance. If you wish to use eGroupware to help manage email, you will need to have a running email system. Consider running [Postfix with Courier and MySQL](/docs/guides/email-with-postfix-courier-and-mysql-on-ubuntu-9-10-karmic/).
+When all applications have been installed, you will be provided with a number of options that you can use to fine-tune the operations and behavior of your EGroupware instance. If you wish to use EGroupware to help manage email, you will need to have a running email system. Consider running [Postfix with Courier and MySQL](/docs/guides/email-with-postfix-courier-and-mysql-on-ubuntu-9-10-karmic/).
## More Information
You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.
-- [eGroupware Home Page](http://www.egroupware.org/)
-- [eGroupware Documentation](http://www.egroupware.org/wiki/)
-- [eGroupware Applications](http://www.egroupware.org/applications)
+- [EGroupware Home Page](http://www.egroupware.org/)
+- [EGroupware Documentation](http://www.egroupware.org/wiki/)
+- [EGroupware Applications](http://www.egroupware.org/applications)
diff --git a/docs/guides/databases/elasticsearch/a-guide-to-elasticsearch-plugins/index.md b/docs/guides/databases/elasticsearch/a-guide-to-elasticsearch-plugins/index.md
index 6144c9da97e..9c9ec784728 100644
--- a/docs/guides/databases/elasticsearch/a-guide-to-elasticsearch-plugins/index.md
+++ b/docs/guides/databases/elasticsearch/a-guide-to-elasticsearch-plugins/index.md
@@ -18,7 +18,7 @@ tags: ["ubuntu","debian","database","java"]
aliases: ['/databases/elasticsearch/a-guide-to-elasticsearch-plugins/']
---
-
+
## What are Elasticsearch Plugins?
diff --git a/docs/guides/databases/general/how-to-install-and-use-replibyte/index.md b/docs/guides/databases/general/how-to-install-and-use-replibyte/index.md
new file mode 100644
index 00000000000..bb7417d1093
--- /dev/null
+++ b/docs/guides/databases/general/how-to-install-and-use-replibyte/index.md
@@ -0,0 +1,417 @@
+---
+slug: how-to-install-and-use-replibyte
+title: "How to Install and Use Replibyte to Assist with Database Development"
+description: "This guide explains how to use Replibyte to seed a database with transformed production data."
+authors: ['Jeff Novotny']
+contributors: ['Jeff Novotny']
+published: 2023-02-24
+modified: 2024-05-02
+keywords: ['use Replibyte', 'install Replibyte', 'Replibyte Linux', 'transform database Replibyte']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[Replibyte Introduction](https://www.replibyte.com/docs/introduction)'
+- '[Replibyte GitHub page](https://github.com/Qovery/Replibyte)'
+- '[Replibyte Database Support](https://www.replibyte.com/docs/databases)'
+- '[Replibyte Datastore Support](https://www.replibyte.com/docs/datastores)'
+- '[How to Use Replibyte Transformers](https://www.replibyte.com/docs/transformers)'
+---
+
+Database testing is a critical component of the quality-assurance cycle, but using production data is inherently insecure. Additionally, the sheer volume of data can be difficult to work with. Unfortunately, it is difficult to create realistic "fake" data, and the results might not be representative. [Replibyte](https://www.replibyte.com/docs/introduction) (also stylized as *RepliByte*) allows users to transform their production data and use the results to seed a test database. This guide explains how to install Replibyte and how to use it to transform a dataset.
+
+## What is Replibyte?
+
+Replibyte transforms existing production data into seed data suitable for non-production environments including development, testing, and customer demos. It prevents unauthorized personnel, such as research and development engineers, from obtaining access to the live data. This anonymous dataset also ensures the integrity of the original data, preventing it from being accidentally altered and then redeployed at a later time.
+
+The transformation process obscures any sensitive real-world details to enhance security while retaining the essential characteristics of the original data set. This ensures the resulting database is based on real-world data. The new database should be roughly equivalent to the production database in terms of the number of rows and distribution of data it contains.
+
+To help users effectively deal with very large databases, Replibyte can subset the original data. This operation restricts the new datastore to a subsection of the original data. A smaller database is easier to work with, requires less bandwidth to transfer, and takes less time to search.
+
+The complete Replibyte process from source to destination database follows the following steps:
+
+1. Replibyte accesses the *source* database and takes a full SQL dump of the data. The source database can either be on the same system or on a remote system.
+1. Replibyte reads and parses the data.
+1. (Optional) For some database types, Replibyte can scale the original data down to a fractional subset. This process shrinks the number of database entries to a certain percentage of the original. The subset operation is currently only supported on PostgreSQL.
+1. Replibyte transforms the original database records, changing or hiding the values of one or more columns. These operations can obfuscate or trim the data, randomize strings, or auto-generate completely new values.
+1. (Optional) Replibyte can compress the data to reduce storage requirements. It can also encrypt the modified data.
+1. The modified data is known as the *dump data*. Replibyte writes this dump data to a *datastore*. A datastore can either reside on the local system or inside cloud storage. Along with the modified data, Replibyte creates an index file enumerating the conversions.
+1. When requested, Replibyte retrieves the modified data and the index file from the datastore. It parses the index file and decrypts or decompresses it as required, restoring the dump data.
+1. Replibyte copies the data to the destination database. The user can access this database like any other.
+
+Some of the features and advantages of Replibyte include the following:
+
+- It is relatively easy to install and use. Replibyte is lightweight and stateless and does not require its own server or daemon.
+- It supports MySQL/MariaDB, PostgreSQL, and MongoDB as the source/destination database for the backup and restore procedures. The [Replibyte Database Documentation page](https://www.replibyte.com/docs/databases) provides full information.
+- It can store a datastore on either a local disk or in the cloud, including inside a Linode Object Storage solution. See the [Replibyte Datastore information](https://www.replibyte.com/docs/datastores) for more details about the possible cloud options.
+- It supports a full complement of transformers. It can randomize a string, keep the first character only, or obfuscate data. It can also auto-generate an email address, first name, phone number, or credit card number. Fields can also be left at their original values to enable specific tests. Users are permitted to create custom transformers.
+- It can work on large databases containing over 10GB of data.
+- For PostgreSQL only, a database subset feature allows users to limit the number of entries in a database. This feature is not yet supported on MySQL.
+- It uses Zlib for compression and AES-256 for encryption.
+
+Replibyte does have a couple of limitations. As of this writing, it is not possible to copy the contents of the datastore directly into a local database, only a remote database. The only exception to this rule is if the local database instance is running inside a Docker container. The second limitation is that the source database containing the original data and the destination database must be of the same type. For instance, a transformed copy of a MySQL database can only be copied into another MySQL or MariaDB database.
+
+## What are the Replibyte Transformer Types?
+
+Each transformer operates on the data in a different way. In most cases, a transformer randomizes, redacts, or hides the original data. However, the `transient` transformer is a "no-op" that leaves the original data intact. The transformer types are as follows.
+
+- `email`: Generates a valid email address.
+- `first-name`: Changes the existing value to a valid first name.
+- `phone-number`: Creates a valid phone number. This can only be applied to a string field, not an integer.
+- `random`: Randomizes a string, maintaining the same string length.
+- `keep-first-char`: Trims a value to its first letter only.
+- `credit-card`: Generates a credit card number in the correct format. This transformer can only act on strings, not integers.
+- `redacted`: Hides the original data using the `*` symbol.
+- `transient`: Leaves the original data unaltered. This must be applied to keys and to other columns that must remain legible.
+
+Transformers are applied on a per-column or per-key basis. The same transformer acts on the same column for all records inside a given table. For more information on transformers, along with examples of how to use them, see the [Replibyte transformer documentation](https://www.replibyte.com/docs/transformers).
+
+## Before You Begin
+
+1. If you have not already done so, create a Linode account and Compute Instance. See our [Getting Started with Linode](/docs/guides/getting-started/) and [Creating a Compute Instance](/docs/guides/creating-a-compute-instance/) guides.
+
+1. Follow our [Setting Up and Securing a Compute Instance](/docs/guides/set-up-and-secure/) guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.
+
+{{< note >}}
+The steps in this guide are written for non-root users. Commands that require elevated privileges are prefixed with `sudo`. If you are not familiar with the `sudo` command, see the [Linux Users and Groups](/docs/tools-reference/linux-users-and-groups/) guide.
+{{< /note >}}
+
+## How to Install Replibyte
+
+Replibyte is fairly easy to install. Although Replibyte is available for Windows, Linux, and MacOS, these instructions are geared toward Ubuntu 22.04 LTS users. However, they are generally applicable to all Linux distributions. To install Replibyte, follow these steps.
+
+1. Update the system. Reboot it if necessary.
+
+ ```command
+ sudo apt-get update -y && sudo apt-get upgrade -y
+ ```
+
+1. Install the `jq` utility.
+
+ ```command
+ sudo apt install jq
+ ```
+
+1. Use `curl` to download the Replibyte archive. This command runs in the background and prints a message to the shell when it is done.
+
+ ```command
+ curl -s https://api.github.com/repos/Qovery/replibyte/releases/latest | jq -r '.assets[].browser_download_url' | grep -i 'linux-musl.tar.gz$' | wget -qi - &
+ ```
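+
+    After the background job prints its completion message, you can optionally confirm the archive is present before extracting it:
+
+    ```command
+    ls *.tar.gz
+    ```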
+
+1. Extract the archive.
+
+ ```command
+ tar zxf *.tar.gz
+ ```
+
+1. Make the Replibyte application executable.
+
+ ```command
+ chmod +x replibyte
+ ```
+
+1. Move the program to a system directory.
+
+ ```command
+ sudo mv replibyte /usr/local/bin/
+ ```
+
+1. Enter the `replibyte -V` command to view the release number and confirm the application is installed correctly.
+
+ ```command
+ replibyte -V
+ ```
+
+ ```output
+ replibyte 0.10.0
+ ```
+
+1. To display the Replibyte help information, enter the `replibyte` command without any arguments.
+
+ ```command
+ replibyte
+ ```
+
+ ```output
+ Replibyte 0.10.0
+ Replibyte is a tool to seed your databases with your production data while keeping sensitive data safe, just pass `-h`
+
+ USAGE:
+ replibyte [OPTIONS] --config
+
+ OPTIONS:
+ -c, --config Replibyte configuration file
+ -h, --help Print help information
+ -n, --no-telemetry disable telemetry
+ -V, --version Print version information
+
+ SUBCOMMANDS:
+ dump all dump commands
+ help Print this message or the help of the given subcommand(s)
+ source all source commands
+ transformer all transformer commands
+ ```
+
+## How to Configure Replibyte Using a YAML File
+
+Before populating a new database, Replibyte must create a datastore from the source database. The source database contains the original data that serves as a template for the new data. The datastore contains the modified data. After a datastore is created, it can be used to seed a new database.
+
+The `conf.yaml` file describes the source database along with a list of transformations to apply to the data. The transformations hide or sanitize any sensitive data in the original file. The YAML file also specifies information about the datastore. The datastore can be located either in the cloud or on the local disk. The destination database does not have to be specified when the datastore is created. The destination configuration is often added later when the data is required.
+
+{{< note >}}
+Before proceeding, a database application must be installed on the system. To access a remote source or destination database, the same database application must be installed locally. For instance, to use Replibyte to create a datastore based on a remote MySQL database, MySQL must also be installed locally. This guide uses MariaDB, but it includes instructions for the other database types. The source and destination database must use the same application, for instance, both have to be MySQL, or both have to be MongoDB.
+{{< /note >}}
+
+1. Identify the database to be transformed into seed data. To properly identify the database, the user name, password, host, port, and database name are required. This guide uses the local MariaDB database `sourcedb`. This database contains a table named `patients`, which has the following table description.
+
+ ```command
+ use sourcedb;
+ desc patients;
+ ```
+
+ ```output
+ +------------+-------------+------+-----+---------+-------+
+ | Field | Type | Null | Key | Default | Extra |
+ +------------+-------------+------+-----+---------+-------+
+ | userid | char(8) | YES | | NULL | |
+ | first_name | varchar(20) | YES | | NULL | |
+ | last_name | varchar(20) | YES | | NULL | |
+ | phone | char(10) | YES | | NULL | |
+ | email | varchar(30) | YES | | NULL | |
+ | unit | varchar(20) | YES | | NULL | |
+ | credit | char(16) | YES | | NULL | |
+ | socialnum | char(9) | YES | | NULL | |
+ +------------+-------------+------+-----+---------+-------+
+ 8 rows in set (0.001 sec)
+ ```
+
+ The `patients` table currently contains the following records.
+
+ ```command
+ SELECT * FROM patients;
+ ```
+
+ ```output
+ +----------+------------+-----------+------------+------------------+---------+------------------+-----------+
+ | userid | first_name | last_name | phone | email | unit | credit | socialnum |
+ +----------+------------+-----------+------------+------------------+---------+------------------+-----------+
+ | 13572468 | Bob | Jones | 1239876543 | bojones@isp1.com | Cardiac | 1122334455667788 | 222333444 |
+ | 13572469 | Jack | Smith | 1239871234 | jacks@isp2.com | Neuro | 2232334344545565 | 343333666 |
+ | 13572470 | John | Doe | 1234547359 | jjdoe43@isp4.com | Trauma | 3579468024683579 | 454454454 |
+ +----------+------------+-----------+------------+------------------+---------+------------------+-----------+
+ ```
+
+1. Create a new `conf.yaml` file on the local system.
+
+ ```command
+ vi conf.yaml
+ ```
+
+1. Add the `source` configuration, including the attribute `connection_uri`. This value specifies the location of the source database. The value of `connection_uri` must follow the format `mysql://[user]:[password]@[host]:[port]/[database]`. To access the local database, use `127.0.0.1` for the `host`. For MariaDB or MySQL, the default port is `3306`. The example in this section uses the `userid` account to access the source database `sourcedb` from the local MariaDB application. Ensure the user has been granted access to the database. Substitute their user name for `userid` and their actual password for `password`.
+
+ {{< note >}}
+ To access a PostgreSQL or MongoDB database, the syntax is similar. For PostgreSQL, the correct syntax is `connection_uri: postgres://[user]:[password]@[host]:[port]/[database]`. For MongoDB, use the format `connection_uri: mongodb://[user]:[password]@[host]:[port]/[database]`.
+ {{< /note >}}
+
+ ```file {title="conf.yaml"}
+ source:
+ connection_uri: mysql://userid:password@127.0.0.1:3306/sourcedb
+ ```
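+
+    Before continuing, you may want to confirm the credentials work by connecting to the source database directly with the MySQL/MariaDB client. This is an optional sanity check, not something Replibyte requires:
+
+    ```command
+    mysql -u userid -p -h 127.0.0.1 sourcedb
+    ```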
+
+1. Replibyte typically transforms at least some data fields. Inside the source section, add a `transformers` key, which accepts an array. The first value in the array uses the `database` key to indicate the database to transform. The second key is the `table` key. It contains the name of the table to modify. The following example illustrates the first section of the `transformers` key. It stipulates the transformations to apply to the `patients` table inside the `sourcedb` database.
+
+ {{< note >}}
+ The file spacing and alignment must be very precise. If the alignment of the keys is not correct, Replibyte cannot parse the file. In the following example, the keys `database` and `table` must align. It is possible to specify multiple tables using this formatting.
+ {{< /note >}}
+
+ ```file {title="conf.yaml"}
+ source:
+ ...
+ transformers:
+ - database: sourcedb
+ table: patients
+ ```
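+
+    As the note above mentions, additional tables can be listed as further entries in the same `transformers` array. The following fragment is only an illustration and assumes a hypothetical second table named `staff`:
+
+    ```file {title="conf.yaml"}
+    source:
+      ...
+      transformers:
+        - database: sourcedb
+          table: patients
+        - database: sourcedb
+          table: staff
+    ```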
+
+1. After this section, add the `columns` information. The value of `columns` is an array of columns along with the transformation to apply to each column. The name of each column is indicated by the `name` key, while the `transformer_name` key specifies the transformer to use. The `transformer_name` must reference one of the eight transformers mentioned in the "What are the Replibyte Transformer Types?" section of this guide.
+
+ In the following example, Replibyte should apply the following transformations to the `patients` table inside the `sourcedb` database.
+ - The `first-name` transformer is applied to the `first_name` column to generate a fake yet realistic first name.
+ - The `random` transformer acts upon the `last_name` column to generate an anonymous last name.
+ - The `phone-number` transformer converts the `phone` column to a new random phone number in the correct format.
+ - The `email` transformer works on the `email` column to generate a fake email address.
+ - The `keep-first-char` transformer is used on the `unit` column. It retains the first letter of the unit, dropping the rest of the unit name.
+    - The `credit-card` transformer alters the `credit` field, generating a series of digits in the correct credit card format.
+    - The `redacted` transformer modifies the `socialnum` column. It obscures most of the digits using the `*` symbol.
+ - The `userid` field is left unchanged. To ensure this column is not altered, the `transient` transformer is applied.
+
+ The following example demonstrates how to add the `columns` list to the `conf.yaml` file, along with a list of column names and the applicable transformers. Note the `columns` key must align with `database` and `table`, while the `name` and `transformer_name` keys must align inside each column.
+
+ ```file {title="conf.yaml"}
+ source:
+ ...
+ transformers:
+ - database: sourcedb
+ table: patients
+ columns:
+ - name: userid
+ transformer_name: transient
+ - name: first_name
+ transformer_name: first-name
+ - name: last_name
+ transformer_name: random
+ - name: phone
+ transformer_name: phone-number
+ - name: email
+ transformer_name: email
+ - name: unit
+ transformer_name: keep-first-char
+ - name: credit
+ transformer_name: credit-card
+ - name: socialnum
+ transformer_name: redacted
+ ```
+
+1. Add a section describing the `datastore`. This is where Replibyte stores the transformed SQL dump for later use. Begin this section with the keyword `datastore`. To instruct Replibyte to store the data locally, add the `local_disk` key. The value of `local_disk` is another key-value pair. The `dir` key indicates the name of the local directory. This directory must already exist before Replibyte is used. Within the YAML file, the `datastore` keyword must align with the `source` keyword.
+
+ {{< note >}}
+ For a full explanation of the configuration required to store the datastore in a cloud computing solution, see the [Replibyte Datastore documentation](https://www.replibyte.com/docs/datastores).
+ {{< /note >}}
+
+ ```file {title="conf.yaml"}
+ source:
+ ...
+ datastore:
+ local_disk:
+ dir: /home/username/replibyte/data
+ ```
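+
+    If the target directory does not exist yet, create it before running Replibyte. For example, for the path used above:
+
+    ```command
+    mkdir -p /home/username/replibyte/data
+    ```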
+
+1. To verify the syntax is correct, run the following command. It should list all available transformers for the selected source and should not display any errors.
+
+ ```command
+ replibyte -c conf.yaml transformer list
+ ```
+
+1. The entire `conf.yaml` file should appear similar to the following sample file.
+
+ ```file {title="conf.yaml"}
+ source:
+ connection_uri: mysql://userid:password@127.0.0.1:3306/sourcedb
+ transformers:
+ - database: sourcedb
+ table: patients
+ columns:
+ - name: userid
+ transformer_name: transient
+ - name: first_name
+ transformer_name: first-name
+ - name: last_name
+ transformer_name: random
+ - name: phone
+ transformer_name: phone-number
+ - name: email
+ transformer_name: email
+ - name: unit
+ transformer_name: keep-first-char
+ - name: credit
+ transformer_name: credit-card
+ - name: socialnum
+ transformer_name: redacted
+ datastore:
+ local_disk:
+ dir: /home/username/replibyte/data
+ ```
+
+## How to Use Replibyte to Create and Restore an Anonymous Database Dump
+
+To create a database dump, the `conf.yaml` file must already exist. The `source` and `datastore` components of the file must be fully defined. Additionally, the local storage directory or cloud computing location must already exist. The destination database must be of the same type as the source database. For instance, if Replibyte extracted and transformed data from a MySQL database, the destination database must also be a MySQL/MariaDB database.
+
+To create and then restore a transformed database, follow these steps.
+
+{{< note >}}
+Replibyte has several limitations. It cannot write the dump to a local database unless the database is running inside a Docker container. However, it is possible to write the database dump from the datastore to a local `dump.sql` file and then import the `sql` file into a database.
+{{< /note >}}
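+
+A `dump.sql` file produced this way can later be loaded into a local database with the standard MySQL/MariaDB client. The following command is a sketch and assumes a local database named `seed` already exists:
+
+```command
+mysql -u userid -p seed < dump.sql
+```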
+
+1. To create the transformed data and save it to the datastore, use the Replibyte `dump create` command. Replibyte displays a progress bar and indicates when it has finished.
+
+ {{< note >}}
+    This guide creates the datastore from the source database. This is the most common approach. It is also possible to base the dump on the output of `mysqldump` or a similar command. For more information on this approach, see the [Replibyte documentation](https://www.replibyte.com/docs/guides/create-a-dump).
+ {{< /note >}}
+
+ ```command
+ replibyte -c conf.yaml dump create
+ ```
+
+ ```output
+ [00:00:00] [#####################################################################] 2.76KiB/2.76KiB (0s)
+ ```
+
+1. Confirm the `dump` and `metadata.json` files now exist inside the `datastore` target directory.
+
+ ```command
+ ls replibyte/data/
+ ```
+
+ ```output
+ dump-1677767483822 metadata.json
+ ```
+
+1. To restore a modified database, add the `destination` information to the `conf.yaml` file. Add the `connection_uri` key and value for the destination database the same way the `source` configuration is specified. The `connection_uri` for the destination also uses the format `mysql://[user]:[password]@[host]:[port]/[database]`. The following example writes the transformed data to the `seed` MariaDB database on another system. Replace `remote_ip_addr` with the IP address of the remote system. Replace `userid` and `password` with the account details for the remote user.
+
+ ```file {title="conf.yaml"}
+ source:
+ ...
+ destination:
+ connection_uri: mysql://userid:password@remote_ip_addr:3306/seed
+ ```
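+
+    Replibyte connects to the `seed` database on the remote system, so that database generally needs to exist and the user needs access to it before the restore. If that is not already the case, the following MariaDB statements are one way to set it up, shown here as an assumption about your remote configuration:
+
+    ```command
+    CREATE DATABASE seed;
+    GRANT ALL PRIVILEGES ON seed.* TO 'userid'@'%' IDENTIFIED BY 'password';
+    FLUSH PRIVILEGES;
+    ```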
+
+1. Before restoring the database contents to the destination database, use the `dump list` command to see all versions of the file inside the datastore.
+
+ ```command
+ replibyte -c conf.yaml dump list
+ ```
+
+ ```output
+ name | size | when | compressed | encrypted
+ --------------------+-----------+----------------+------------+-----------
+ dump-1677767483822 | 982 Bytes | 28 minutes ago | true | false
+ ```
+
+1. To write the contents of the datastore to a remote database, use the `dump restore` command and the `remote` option. To restore based on the latest version in the datastore, use `-v latest`.
+
+ {{< note >}}
+ Replibyte cannot write to a local database unless it is running inside a Docker container. To write the SQL dump to a local file, use the command `replibyte -c conf.yaml dump restore local -v latest -o > dump.sql`.
+ {{< /note >}}
+
+ ```command
+ replibyte -c conf.yaml dump restore remote -v latest
+ ```
+
+ ```output
+ Restore successful!
+ ```
+
+1. To confirm the sensitive data in the destination database is appropriately hidden, access the remote system. Launch the database, and dump the description and contents of the `patients` table. The table definition should be the same, but the contents should be transformed into unrecognizable data. As expected, the `userid` field remains unchanged.
+
+ ```command
+ SELECT * FROM patients;
+ ```
+
+ ```output
+    +----------+------------+-----------+------------+---------------------+------+------------------+-----------+
+    | userid   | first_name | last_name | phone      | email               | unit | credit           | socialnum |
+    +----------+------------+-----------+------------+---------------------+------+------------------+-----------+
+ | 13572468 | Stephany | 9sxCe | (761) 618- | alaina@example.net | C | 4574727772422 | 222****** |
+ | 13572469 | Ola | sREkQ | 124-880-12 | sincere@example.org | N | 4233777453337 | 343****** |
+ | 13572470 | Sister | nNo | 1-590-599- | modesto@example.net | T | 5185478832614352 | 454****** |
+ +----------+------------+-----------+------------+---------------------+------+------------------+-----------+
+ ```
+
+1. To delete a dump from the datastore, use the `dump delete` command and the name of the dump. It is also possible to keep only the ten most recent files using the command `replibyte -c conf.yaml dump delete --keep-last=10`.
+
+ ```command
+ replibyte -c conf.yaml dump delete [dumpid]
+ ```
+
+ ```output
+ Dump deleted!
+ ```
+
+## Conclusion
+
+Replibyte transforms the data in a production database to help protect sensitive data. Several databases are supported, including MySQL/MariaDB, PostgreSQL, and MongoDB. Replibyte can create random strings, redact information, and create valid fake names, emails, and credit card numbers. To install Replibyte, download the Replibyte archive from GitHub. Then create a YAML file indicating the source database and the transformations to perform. Replibyte transfers the transformed database dump to a datastore, which can be stored locally or in the cloud. Replibyte can later transfer the data from the datastore to a destination database. For more information on Replibyte, see the [Official Replibyte documentation](https://www.replibyte.com/docs/introduction).
\ No newline at end of file
diff --git a/docs/guides/databases/harperdb/create-a-harperdb-cluster/index.md b/docs/guides/databases/harperdb/create-a-harperdb-cluster/index.md
new file mode 100644
index 00000000000..07972b31ef1
--- /dev/null
+++ b/docs/guides/databases/harperdb/create-a-harperdb-cluster/index.md
@@ -0,0 +1,522 @@
+---
+slug: create-a-harperdb-cluster
+title: "How to Create a HarperDB Cluster"
+description: 'This guide explains how to configure HarperDB and how to create a multi-node cluster for data replication.'
+authors: ["Jeff Novotny"]
+contributors: ["Jeff Novotny"]
+published: 2024-05-01
+keywords: ['install HarperDB','configure HarperDB','HarperDB cluster','data replication HarperDB']
+tags: ['database', 'nosql']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[HarperDB](https://www.harperdb.io/)'
+- '[HarperDB API reference](https://api.harperdb.io/)'
+- '[HarperDB Developer Documentation](https://docs.harperdb.io/docs/)'
+- '[Installing HarperDB](https://docs.harperdb.io/docs/install-harperdb)'
+- '[HarperDB Linux install documentation](https://docs.harperdb.io/docs/install-harperdb/linux)'
+- '[Mozilla btoa documentation](https://developer.mozilla.org/en-US/docs/Web/API/btoa)'
+---
+
+[HarperDB](https://www.harperdb.io/) is a versatile database solution that combines SQL and NoSQL functionality. It includes a comprehensive built-in API for easy integration with other applications. This guide provides a brief introduction to HarperDB and explains how to install it. It also explains how to configure multiple database instances into a cluster and replicate data.
+
+## What is HarperDB?
+
+HarperDB combines a flexible database, built-in API, and distribution logic into a single backend. This solution, known as an *embedded database*, allows developers to more quickly and easily create integrated web applications. HarperDB allows both NoSQL and SQL tables to be mixed together in the same database and schema. SQL tables are highly structured and normalized, while NoSQL permits more freeform data. This combination enables access to legacy data and operational systems in the same place as new business intelligence analytics.
+
+HarperDB is available through the HarperDB Cloud or as a self-hosted solution. The optional HarperDB Studio provides a visual GUI for storing or retrieving data but requires registration. Users can configure HarperDB through either the comprehensive API or the HarperDB CLI. Unfortunately, the CLI only supports a subset of the current functionality. API calls can be embedded into an application or sent as stand-alone requests using `curl` or a similar utility.
+
+{{< note >}}
+Users can send most SQL commands to HarperDB using the API, though other query languages are preferred. Review [HarperDB SQL Guide](https://docs.harperdb.io/docs/developers/sql-guide) for more details.
+{{< /note >}}
+
+HarperDB is optimized for fast performance and scalability, with sub-millisecond latency between the API and data layer. NoSQL data can be accessed as quickly as SQL tables in traditional *relational database management systems* (RDBMS). HarperDB is particularly useful for gaming, media, manufacturing, status reporting, and real-time updates.
+
+HarperDB also supports clustering and per-table data replication. Data replication can be configured in one or both directions, and administrators have full control over how data is replicated within the cluster. A HarperDB instance can both send table updates to a second node and receive updates from it in return, while simultaneously transmitting changes to a different table to another node in a unidirectional manner. HarperDB minimizes data latency between nodes, allowing clusters to span different regions and even different continents. Clusters can grow very large, permitting virtually unlimited horizontal scaling.
+
+Some advantages of HarperDB are:
+
+- The HarperDB API provides applications with direct database access. This allows the application and its data to be bundled together in a single distribution.
+- Each HarperDB node is atomic and guarantees "exactly-once" delivery. It avoids unnecessary data duplication.
+- Every node in the cluster can read, write, and replicate data.
+- HarperDB features a fast and resilient caching mechanism.
+- Connections are self-healing, allowing for fast replication even in an unstable network.
+- HarperDB supports data streaming and *edge processing*. This technique pre-processes data, only storing or transmitting the most important information.
+- NoSQL tables support dynamic schemas, which can seamlessly change as new data arrives. HarperDB provides an auto-indexing function for more efficient hashing.
+- HarperDB allows SQL queries on both structured and unstructured data.
+- HarperDB's Custom Functions allow developers to add their own API endpoints and manage authentication and authorization.
+
+## Before You Begin
+
+1. If you have not already done so, create a Linode account and Compute Instance. See our [Getting Started with Linode](/docs/guides/getting-started/) and [Creating a Compute Instance](/docs/guides/creating-a-compute-instance/) guides.
+
+1. Follow our [Setting Up and Securing a Compute Instance](/docs/guides/set-up-and-secure/) guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.
+
+1. On a multi-user system, it is best to create a dedicated HarperDB user account with `sudo` access. Use this account for the instructions in this guide.
+
+{{< note >}}
+The steps in this guide are written for non-root users. Commands that require elevated privileges are prefixed with `sudo`. If you are not familiar with the `sudo` command, see the [Linux Users and Groups](/docs/tools-reference/linux-users-and-groups/) guide.
+{{< /note >}}
+
+## How To Install HarperDB
+
+Run these instructions on every node in the cluster. Each cluster must contain at least two nodes. These guidelines are designed for Ubuntu 22.04 LTS users but apply similarly to other Linux distributions. HarperDB is also available as a Docker container or as a `.tgz` file for offline installation. For more details on these options and the standard installation procedure, see the [HarperDB installation instructions](https://docs.harperdb.io/docs/install-harperdb).
+
+1. Ensure the system is up to date by executing the following command:
+
+ ```command
+ sudo apt-get update -y && sudo apt-get upgrade -y
+ ```
+
+1. HarperDB requires Node.js to run properly. To install Node.js, first install the *Node Version Manager* (NVM). To download and install NVM, use the following command.
+
+ ```command
+ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
+ ```
+
+1. Log out and log back into the terminal to activate NVM.
+
+ ```command
+ exit
+ ssh username@system_IP
+ ```
+
+1. Use NVM to install Node.js. This command installs Node.js release 20, the current LTS release as of this writing. It also installs the NPM package manager for Node.js.
+
+ {{< note >}}
+ HarperDB requires Node.js release 14 or higher.
+ {{< /note >}}
+
+ ```command
+ nvm install 20
+ ```
+
+ NVM installs the latest Node.js 20 LTS patch version and sets that as the default.
+
+1. Create a swap file for the system.
+
+ ```command
+ sudo dd if=/dev/zero of=/swapfile bs=128M count=16
+ sudo chmod 600 /swapfile
+ sudo mkswap /swapfile
+ sudo swapon /swapfile
+ echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
+ ```
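+
+    You can confirm the swap space is active using `swapon`, which should list `/swapfile` in its output:
+
+    ```command
+    swapon --show
+    ```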
+
+1. Increase the open file limits for the account. Replace `accountname` with the name of the actual account.
+
+ ```command
+ echo "accountname soft nofile 500000" | sudo tee -a /etc/security/limits.conf
+ echo "accountname hard nofile 1000000" | sudo tee -a /etc/security/limits.conf
+ ```
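+
+    The new limits apply to subsequent login sessions. After logging out and back in, you can check the current soft limit with the following command:
+
+    ```command
+    ulimit -n
+    ```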
+
+1. Use NPM to install HarperDB.
+
+ ```command
+ npm install -g harperdb
+ ```
+
+## How To Configure and Initialize the HarperDB Cluster
+
+This section explains the steps required to initialize and run HarperDB. It also describes the additional configuration required to create and enable a HarperDB cluster. Each cluster must contain at least two nodes.
+
+Some cluster attributes can be passed as parameters to the initial `harperdb start` command. If the system is initially configured as a stand-alone instance, it can be added to a cluster later on. However, further changes cannot be made through the command line. They must be implemented in either the `harperdb-config.yaml` file or through API calls.
+
+For simplicity and consistency, this guide appends most of the required cluster configuration to the initial `harperdb start` command. It then completes the configuration process using API calls. For more information on the HarperDB API, see the [HarperDB API documentation](https://api.harperdb.io/).
+
+{{< note >}}
+Configuration tasks can also be accomplished through the HarperDB Studio GUI. HarperDB Studio requires a HarperDB user account and registration.
+{{< /note >}}
+
+Replication occurs on a per-table basis and is configured after the schema and table are defined. See the following section for a more complete explanation. Follow these steps to enable clustering on your HarperDB nodes.
+
+1. On the first node, use `harperdb start` to launch the application. Provide the following configuration attributes in the command.
+
+ - For `TC_AGREEMENT`, indicate `yes` to accept the terms of the agreement.
+ - Define the `ROOTPATH` directory for persistent data. This example sets the directory to `/home/user/hdb`. Replace the `user` with the actual name of the user account.
+ - Set the `HDB_ADMIN_USERNAME` to the name of the administrative user.
+ - Provide a password for the administrative account in `HDB_ADMIN_PASSWORD`. Replace the `password` with a more secure password.
+ - Set `OPERATIONSAPI_NETWORK_PORT` to `9925`.
+ - Choose a name for the `CLUSTERING_USER` and provide a password for the user in `CLUSTERING_PASSWORD`. These values must be the same for all nodes in the cluster.
+ - Set `CLUSTERING_ENABLED` to `true`.
+ - Identify the node using `CLUSTERING_NODENAME`. This name must be unique within the cluster.
+
+ {{< note >}}
+ HTTPS is recommended for better security on production systems or with sensitive data. To use HTTPS, add the parameters `--OPERATIONSAPI_NETWORK_HTTPS "true"` and `--CUSTOMFUNCTIONS_NETWORK_HTTPS "true"`.
+ {{< /note >}}
+
+ ```command
+ harperdb start \
+ --TC_AGREEMENT "yes" \
+ --ROOTPATH "/home/user/hdb" \
+ --OPERATIONSAPI_NETWORK_PORT "9925" \
+ --HDB_ADMIN_USERNAME "HDB_ADMIN" \
+ --HDB_ADMIN_PASSWORD "password" \
+ --CLUSTERING_ENABLED "true" \
+ --CLUSTERING_USER "cluster_user" \
+ --CLUSTERING_PASSWORD "password" \
+ --CLUSTERING_NODENAME "hdb1"
+ ```
+
+ ```output
+ |------------- HarperDB 4.1.2 successfully started ------------|
+ ```
+
+1. (**Optional**) To launch HarperDB at bootup, create a crontab entry for the application. Substitute the name of the administrative account for `user` and ensure the path reflects the Node.js release installed through NVM. In this example, the path entry reflects Node.js release `18.17.0`; adjust it to match the version reported by `node -v`.
+
+ {{< note >}}
+ To integrate HarperDB with `systemd` and start/stop it using `systemctl`, see the [HarperDB Linux documentation](https://docs.harperdb.io/docs/install-harperdb/linux).
+ {{< /note >}}
+
+ ```command
+ (crontab -l 2>/dev/null; echo "@reboot PATH=\"/home/user/.nvm/versions/node/v18.17.0/bin:$PATH\" && harperdb start") | crontab -
+ ```
+
+1. Start HarperDB on the remaining nodes. Change the value of `CLUSTERING_NODENAME` to a different value. In this example, it is set to `hdb2`. The remaining attributes are the same as on the first node.
+
+ ```command
+ harperdb start \
+ --TC_AGREEMENT "yes" \
+ --ROOTPATH "/home/user/hdb" \
+ --OPERATIONSAPI_NETWORK_PORT "9925" \
+ --HDB_ADMIN_USERNAME "HDB_ADMIN" \
+ --HDB_ADMIN_PASSWORD "password" \
+ --CLUSTERING_ENABLED "true" \
+ --CLUSTERING_USER "cluster_user" \
+ --CLUSTERING_PASSWORD "password" \
+ --CLUSTERING_NODENAME "hdb2"
+ ```
+
+1. Run `harperdb status` on each node to confirm HarperDB is active. The `status` field should indicate `running`.
+
+ ```command
+ harperdb status
+ ```
+
+ ```output
+ harperdb:
+ status: running
+ pid: 1726
+ clustering:
+ hub server:
+ status: running
+ pid: 1698
+ leaf server:
+ status: running
+ pid: 1715
+ network:
+ - name: hdb1
+ response time: 6
+ connected nodes: []
+ routes: []
+ replication:
+ node name: hdb1
+ is enabled: true
+ connections: []
+ ```
+
+1. Determine the network topology for the cluster. A full mesh of connections is not required. Data can be replicated to any cluster node provided it is connected to the rest of the cluster. Design some measure of resiliency into the network. If a hub-and-spoke architecture is configured, the remaining nodes would be isolated if the central node suffers an outage. As a general guideline, connect each node to two other nodes. It is not necessary to add the route in both directions. For instance, a connection between `node1` and `node2` can be added to either `node1` or `node2`. Successful negotiation establishes a bidirectional route.
+
+1. Authentication is required to send messages to HarperDB using the API. To derive the `AuthorizationKey` from the name and password of the administrator account, use the JavaScript `btoa()` function. Run the command `btoa("HDB_ADMIN:password")` to convert the account credentials into a Base64 string. Replace the `password` with the actual password.
+
+ {{< note >}}
+ JavaScript commands can be executed in a web browser console. On Firefox, select **Tools->Browser Tools->Web Developer Tools** to access the console. Choose the **Console** option within the developer window, then enter the command. Alternatively, online JavaScript emulators are widely available for the same purpose. Use the result for the `AuthorizationKey` values in the following API calls. See the [Mozilla documentation](https://developer.mozilla.org/en-US/docs/Web/API/btoa) for more information.
+ {{< /note >}}
+
+ ```command
+ btoa("HDB_ADMIN:password")
+ ```
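+
+    If you prefer to stay in the terminal, an equivalent Base64 value can be generated with the standard `base64` utility instead of JavaScript:
+
+    ```command
+    echo -n "HDB_ADMIN:password" | base64
+    ```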
+
+1. Add routes until the network architecture is fully implemented. If a cluster consists of `node1`, `node2`, and `node3`, add a route on `node1` to reach `node2` and another on `node2` to `node3`. On node `hdb1`, run the `curl` command shown below to install a route to `hdb2`. Include the following information:
+
+ * In the `POST` header, send the command to the local HarperDB process at `http://localhost:9925`.
+ * Include an `Authorization` header. Use the `AuthorizationKey` derived from the administrator account and password in the previous step.
+    * Inside the `data` header, set the `operation` to `cluster_set_routes` and set the `server` to `hub`.
+    * Use `routes` to specify a list of one or more routes to install. Each route consists of a `host` and a `port`, which is typically `9932`. The `host` is the IP address of the peer system. In the following example, replace `192.0.2.10` with the actual IP address of the peer.
+
+ ```command
+ curl --location --request POST 'http://localhost:9925' \
+ --header 'Authorization: Basic AuthorizationKey' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "operation": "cluster_set_routes",
+ "server": "hub",
+ "routes":[ {"host": "192.0.2.10", "port": 9932} ]
+ }'
+ ```
+
+ ```output
+ {"message":"cluster routes successfully set","set":[{"host":"192.0.2.10","port":9932}],"skipped":[]}
+ ```
+
+1. Stop and start the HarperDB instance to quickly negotiate the route.
+
+ ```command
+ harperdb stop
+ harperdb start
+ ```
+
+1. Run the `harperdb status` command again. Ensure the route is displayed under `routes`.
+
+ ```command
+ harperdb status
+ ```
+
+ ```output
+ harperdb:
+ status: running
+ pid: 20926
+ clustering:
+ hub server:
+ status: running
+ pid: 20899
+ leaf server:
+ status: running
+ pid: 20914
+ network:
+ - name: hdb1
+ response time: 18
+ connected nodes:
+ - hdb2
+ routes:
+ - host: 192.0.2.10
+ port: 9932
+ - name: hdb2
+ response time: 92
+ connected nodes:
+ - hdb1
+ routes: []
+ replication:
+ node name: hdb1
+ is enabled: true
+ connections: []
+ ```
+
+## How to Add and Replicate Data on HarperDB
+
+The cluster is now ready for replication. Replication occurs on a per-table basis in HarperDB, so data is not automatically replicated. Instead, one or more subscriptions define how to manage the table data. The schema and table must be created first before adding any subscriptions. Each subscription references a single peer node. To replicate data to multiple nodes, multiple subscriptions must be added.
+
+A subscription contains the name of the `schema` and `table` to replicate, along with Boolean values for `publish` and `subscribe`. When `publish` is set to `true`, transactions on the local node are replicated to the remote node. Setting `subscribe` to `true` means any changes to the remote table are sent to the local node. Both values can be set to `true`, resulting in bidirectional replication. In all cases, the local node is the one receiving the subscription request.
+
+The following example demonstrates how to create a schema, table, and subscription on node `hdb1`. The subscription both publishes and subscribes to the `dog` table, resulting in two-way replication between nodes `hdb1` and `hdb2`.
+
+1. Create the `dev` schema on node `hdb1` through the HarperDB API using the `create_schema` operation. Provide the correct value for the `AuthorizationKey` as described earlier.
+
+ ```command
+ curl --location --request POST 'http://localhost:9925' \
+ --header 'Authorization: Basic AuthorizationKey' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "operation": "create_schema",
+ "schema": "dev"
+ }'
+ ```
+
+ ```output
+ {"message":"schema 'dev' successfully created"}
+ ```
+
+1. Create the `dog` table within the `dev` schema. This API call invokes the `create_table` operation and sets the `hash_attribute` to `id`. This is a NoSQL table, so columns and types are not defined.
+
+ ```command
+ curl --location --request POST 'http://localhost:9925' \
+ --header 'Authorization: Basic AuthorizationKey' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "operation": "create_table",
+ "schema": "dev",
+ "table": "dog",
+ "hash_attribute": "id"
+ }'
+ ```
+
+ ```output
+ {"message":"table 'dev.dog' successfully created."}
+ ```
+
+1. Add a subscription to the `dog` table using the API `add_node` operation. Add the following information to the request.
+
+ * Set `node_name` to `hdb2` to designate it as the peer for replication.
+    * Specify the schema and table to replicate. In this example, the `schema` is `dev` and the `table` is `dog`.
+    * To transmit updates to `hdb2`, set `publish` to `true`. This configures replication in one direction only.
+
+ {{< note >}}
+    The `add_node` operation can create multiple subscriptions for several schemas/tables at the same time. However, all subscriptions in the request must relate to the same peer. Separate each subscription with a comma and enclose the list in `[]` brackets. To replicate more tables to a different node, call the `add_node` API again and provide the new `node_name`.
+ {{< /note >}}
+
+ ```command
+ curl --location --request POST 'http://localhost:9925' \
+ --header 'Authorization: Basic AuthorizationKey' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "operation": "add_node",
+ "node_name": "hdb2",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "subscribe": false,
+ "publish": true
+ }
+ ]
+ }'
+ ```
+
+ ```output
+ {"message":"Successfully added 'hdb2' to manifest","added":[{"schema":"dev","table":"dog","publish":true,"subscribe":false}],"skipped":[]}
+ ```
+
+1. Use `harperdb status` to confirm HarperDB is aware of the subscription.
+
+ ```command
+ harperdb status
+ ```
+
+ ```output
+ ...
+ replication:
+ node name: hdb1
+ is enabled: true
+ connections:
+ - node name: hdb2
+ status: open
+ ports:
+ clustering: 9932
+ operations api: 9925
+ latency ms: 132
+ uptime: 6h 49m 43s
+ subscriptions:
+ - schema: dev
+ table: dog
+ publish: true
+ subscribe: false
+ ```
+
+1. To subscribe to updates to the `dog` table from `hdb2`, use the `update_node` operation. Set both `subscribe` and `publish` to `true` in the API call.
+
+ {{< note >}}
+    `subscribe` and `publish` could both have been set to `true` in the original `add_node` operation. This method demonstrates how to update an existing subscription. To completely remove the subscription, use the `remove_node` operation and include the name of the node under `node_name`, as sketched at the end of this step.
+ {{< /note >}}
+
+ ```command
+ curl --location --request POST 'http://localhost:9925' \
+ --header 'Authorization: Basic AuthorizationKey' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "operation": "update_node",
+ "node_name": "hdb2",
+ "subscriptions": [
+ {
+ "schema": "dev",
+ "table": "dog",
+ "subscribe": true,
+ "publish": true
+ }
+ ]
+ }'
+ ```
+
+ ```output
+ {"message":"Successfully updated 'hdb2'","updated":[{"schema":"dev","table":"dog","publish":true,"subscribe":true}],"skipped":[]}
+ ```
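+
+    For reference, a subscription added this way can later be removed entirely with the `remove_node` operation mentioned in the note above. The following call is a sketch that follows the same API pattern used throughout this guide:
+
+    ```command
+    curl --location --request POST 'http://localhost:9925' \
+    --header 'Authorization: Basic AuthorizationKey' \
+    --header 'Content-Type: application/json' \
+    --data '{
+        "operation": "remove_node",
+        "node_name": "hdb2"
+    }'
+    ```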
+
+1. Add a record to the table to ensure replication is working. Either SQL or NoSQL can be used to add data to the `dog` table. This example adds a record using the NoSQL `insert` operation. Specify `dev` as the `schema` and `dog` as the table. Use the `records` attribute to add one or more entries to the table. Because NoSQL is very free-form, a variable number of key-value fields can be appended to the record. The `hash_attribute` is set to `id` in the table, so each new record must provide a unique value for the `id` field.
+
+ ```command
+ curl --location --request POST 'http://localhost:9925' \
+ --header 'Authorization: Basic AuthorizationKey' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "operation": "insert",
+ "schema": "dev",
+ "table": "dog",
+ "records": [
+ {
+ "id": 1,
+ "dog_name": "Penny",
+ "age": 7,
+ "weight": 38
+ }
+ ]
+ }'
+ ```
+
+ ```output
+ {"message":"inserted 1 of 1 records","inserted_hashes":[1],"skipped_hashes":[]}
+ ```
+
+1. To confirm the record has been added, retrieve the data using an SQL query. To send an SQL query to HarperDB, specify `sql` for the `operation` and set `sql` to the desired SQL statement. The query `SELECT * FROM dev.dog` retrieves all records from the table. The output confirms `Penny` has been added to the table.
+
+ {{< note >}}
+ NoSQL data is not normalized or columnar, so the key-value pairs do not necessarily appear in any particular order.
+ {{< /note >}}
+
+ ```command
+ curl --location --request POST 'http://localhost:9925' \
+ --header 'Authorization: Basic AuthorizationKey' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "operation": "sql",
+ "sql": "SELECT * FROM dev.dog"
+ }'
+ ```
+
+ ```output
+ [{"weight":38,"id":1,"dog_name":"Penny","__updatedtime__":1690742615459.453,"__createdtime__":1690742615459.453,"age":7}]
+ ```
+
+1. Change to the console of the `hdb2` node and run the same command. The output should be the same, indicating the record has been replicated to this node.
+
+ ```command
+ curl --location --request POST 'http://localhost:9925' \
+ --header 'Authorization: Basic AuthorizationKey' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "operation": "sql",
+ "sql": "SELECT * FROM dev.dog"
+ }'
+ ```
+
+ ```output
+ [{"id":1,"age":7,"__createdtime__":1690742615459.453,"weight":38,"dog_name":"Penny","__updatedtime__":1690742615459.453}]
+ ```
+
+1. Confirm replication works in the opposite direction. Using the console for the `hdb2` node, add a second entry to the `dev.dog` table. Increment the `id` to `2` to ensure it is unique within the table.
+
+ ```command
+ curl --location --request POST 'http://localhost:9925' \
+ --header 'Authorization: Basic AuthorizationKey' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "operation": "insert",
+ "schema": "dev",
+ "table": "dog",
+ "records": [
+ {
+ "id": 2,
+ "dog_name": "Rex",
+ "age": 2,
+ "weight": 68
+ }
+ ]
+ }'
+ ```
+
+ ```output
+ {"message":"inserted 1 of 1 records","inserted_hashes":[2],"skipped_hashes":[]}
+ ```
+
+1. Return to the first node and retrieve all records from the `dev.dog` table. The reply should now list two dogs, including the entry added on `hdb2`. This confirms data is replicating in both directions.
+
+ ```command
+ curl --location --request POST 'http://localhost:9925' \
+ --header 'Authorization: Basic AuthorizationKey' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "operation": "sql",
+ "sql": "SELECT * FROM dev.dog"
+ }'
+ ```
+
+ ```output
+ [{"weight":38,"id":1,"dog_name":"Penny","__updatedtime__":1690742615459.453,"__createdtime__":1690742615459.453,"age":7},{"weight":68,"id":2,"dog_name":"Rex","__updatedtime__":1690744053074.6084,"__createdtime__":1690744053074.6084,"age":2}]
+ ```
\ No newline at end of file
diff --git a/docs/guides/databases/surrealdb/_index.md b/docs/guides/databases/surrealdb/_index.md
new file mode 100644
index 00000000000..fb81df40678
--- /dev/null
+++ b/docs/guides/databases/surrealdb/_index.md
@@ -0,0 +1,9 @@
+---
+title: SurrealDB
+description: 'SurrealDB is a modern database designed for serverless applications.'
+authors: ["Linode"]
+contributors: ["Linode"]
+keywords: ["SurrealDB"]
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+show_in_lists: true
+---
diff --git a/docs/guides/databases/surrealdb/deploy-surrealdb-cluster/example-tikv-operator-crds.yaml b/docs/guides/databases/surrealdb/deploy-surrealdb-cluster/example-tikv-operator-crds.yaml
new file mode 100644
index 00000000000..5c6cca32aad
--- /dev/null
+++ b/docs/guides/databases/surrealdb/deploy-surrealdb-cluster/example-tikv-operator-crds.yaml
@@ -0,0 +1,251 @@
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ name: tikvclusters.tikv.org
+spec:
+ group: tikv.org
+ scope: Namespaced
+ names:
+ plural: tikvclusters
+ singular: tikvcluster
+ kind: TikvCluster
+ versions:
+ - name: v1alpha1
+ served: true
+ storage: true
+ schema:
+ openAPIV3Schema:
+ type: object
+ properties:
+ spec:
+ type: object
+ properties:
+ version:
+ type: string
+ pd:
+ type: object
+ properties:
+ baseImage:
+ type: string
+ replicas:
+ type: number
+ storageClassName:
+ type: string
+ nullable: true
+ requests:
+ type: object
+ properties:
+ storage:
+ type: string
+ config:
+ type: object
+ tikv:
+ type: object
+ properties:
+ baseImage:
+ type: string
+ replicas:
+ type: number
+ storageClassName:
+ type: string
+ nullable: true
+ requests:
+ type: object
+ properties:
+ storage:
+ type: string
+ config:
+ type: object
+ status:
+ type: object
+ properties:
+ pd:
+ type: object
+ properties:
+ synced:
+ type: boolean
+ phase:
+ type: string
+ statefulSet:
+ type: object
+ members:
+ type: object
+ additionalProperties:
+ type: object
+ properties:
+ name:
+ type: string
+ id:
+ type: string
+ clientURL:
+ type: string
+ health:
+ type: boolean
+ lastTransitionTime:
+ type: string
+ format: datetime
+ leader:
+ type: object
+ properties:
+ name:
+ type: string
+ id:
+ type: string
+ clientURL:
+ type: string
+ health:
+ type: boolean
+ lastTransitionTime:
+ type: string
+ format: datetime
+ failureMembers:
+ type: object
+ additionalProperties:
+ type: object
+ properties:
+ podName:
+ type: string
+ memberID:
+ type: string
+ pvcUID:
+ type: string
+ memberDeleted:
+ type: boolean
+ createdAt:
+ type: string
+ format: datetime
+ unjoinedMembers:
+ type: object
+ additionalProperties:
+ type: object
+ properties:
+ podName:
+ type: string
+ pvcUID:
+ type: string
+ createdAt:
+ type: string
+ format: datetime
+ image:
+ type: string
+ tikv:
+ type: object
+ properties:
+ synced:
+ type: boolean
+ phase:
+ type: string
+ statefulSet:
+ type: object
+ stores:
+ type: object
+ additionalProperties:
+ type: object
+ properties:
+ id:
+ type: string
+ podName:
+ type: string
+ ip:
+ type: string
+ leaderCount:
+ type: integer
+ format: int32
+ state:
+ type: string
+ lastHeartbeatTime:
+ type: string
+ format: datetime
+ lastTransitionTime:
+ type: string
+ format: datetime
+ tombstoneStores:
+ type: object
+ additionalProperties:
+ type: object
+ properties:
+ id:
+ type: string
+ podName:
+ type: string
+ ip:
+ type: string
+ leaderCount:
+ type: integer
+ format: int32
+ state:
+ type: string
+ lastHeartbeatTime:
+ type: string
+ format: datetime
+ lastTransitionTime:
+ type: string
+ format: datetime
+ failureStores:
+ type: object
+ additionalProperties:
+ type: object
+ properties:
+ podName:
+ type: string
+ storeID:
+ type: string
+ createdAt:
+ type: string
+ format: datetime
+ image:
+ type: string
+ conditions:
+ type: array
+ items:
+ type: object
+ properties:
+ type:
+ type: string
+ status:
+ type: string
+ lastUpdateTime:
+ type: string
+ format: datetime
+ lastTransitionTime:
+ type: string
+ format: datetime
+ reason:
+ type: string
+ message:
+ type: string
+ additionalPrinterColumns:
+ - jsonPath: .status.conditions[?(@.type=="Ready")].status
+ name: Ready
+ type: string
+ - jsonPath: .status.pd.image
+ description: The image for PD cluster
+ name: PD
+ type: string
+ - jsonPath: .spec.pd.replicas
+ description: The desired replicas number of PD cluster
+ name: PD Desire
+ type: integer
+ - jsonPath: .status.pd.statefulSet.readyReplicas
+ description: The current replicas number of PD cluster
+ name: Current
+ type: integer
+ - jsonPath: .status.tikv.image
+ description: The image for TiKV cluster
+ name: TiKV
+ type: string
+ - jsonPath: .spec.tikv.replicas
+ description: The desired replicas number of TiKV cluster
+ name: TiKV Desire
+ type: integer
+ - jsonPath: .status.tikv.statefulSet.readyReplicas
+ description: The current replicas number of TiKV cluster
+ name: Current
+ type: integer
+ - jsonPath: .metadata.creationTimestamp
+ name: Age
+ type: date
+ - jsonPath: .status.conditions[?(@.type=="Ready")].message
+ name: Status
+ priority: 1
+ type: string
diff --git a/docs/guides/databases/surrealdb/deploy-surrealdb-cluster/index.md b/docs/guides/databases/surrealdb/deploy-surrealdb-cluster/index.md
new file mode 100644
index 00000000000..e4f9416ddc6
--- /dev/null
+++ b/docs/guides/databases/surrealdb/deploy-surrealdb-cluster/index.md
@@ -0,0 +1,506 @@
+---
+slug: deploy-surrealdb-cluster
+title: "Deploying a SurrealDB Cluster on Kubernetes"
+description: "SurrealDB has been designed with distributed infrastructure in mind. With high-performance and scalable architecture, SurrealDB fits well on a Kubernetes cluster setup. In this tutorial, learn how to deploy your own SurrealDB instance with Docker and Kubernetes."
+authors: ['Nathaniel Stickman']
+contributors: ['Nathaniel Stickman']
+published: 2024-05-01
+keywords: ['surrealdb docker','surrealdb cluster','surrealdb kubernetes']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[SurrealDB Documentation](https://surrealdb.com/docs)'
+- '[SurrealDB: Run a Multi-node, Scalable Cluster with TiKV](https://surrealdb.com/docs/installation/running/tikv)'
+- '[TiKV: Using TiKV Operator](https://tikv.org/docs/4.0/tasks/try/tikv-operator/)'
+---
+
+SurrealDB is a powerful alternative to traditional relational databases. It is designed to operate effectively in horizontally scaling distributed environments, supports inter-document relations, and can serve as a backend for your serverless web applications. This tutorial walks through setting up a SurrealDB instance in a distributed environment, using Kubernetes for cluster deployment and TiKV for clustered persistence.
+
+## How to Deploy a SurrealDB Cluster
+
+There are numerous ways to set up a distributed architecture with SurrealDB, but using a Kubernetes cluster is probably the most approachable. It lets you leverage the tooling and community support of Kubernetes to fine-tune the setup for your needs.
+
+Typically, SurrealDB uses either in-memory or in-file storage, but it also supports a few options for clustered persistence. One of the best supported is [TiKV](https://tikv.org/), which this tutorial uses to back the example distributed SurrealDB servers.
+
+### Provisioning the Kubernetes Cluster
+
+To get started, you need to have a Kubernetes cluster up and running, along with `kubectl` or a similar tool set up to manage the cluster. This tutorial provides commands specifically for `kubectl`.
+
+Linode offers the Linode Kubernetes Engine (LKE) as a convenient way to get started. You can deploy a cluster directly from the Cloud Manager. Our guide [Linode Kubernetes Engine - Getting Started](/docs/products/compute/kubernetes/get-started/) includes steps for deploying a new cluster and setting up a `kubectl` instance to manage it.
+
+The rest of this tutorial assumes you have a Kubernetes cluster up and running and configured for management with a local `kubectl` instance. The examples in this tutorial also assume that your cluster has three nodes, so adjust accordingly throughout if your cluster differs.
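+
+For example, to confirm that `kubectl` is configured correctly and that the cluster has the expected three nodes, you can list the cluster's nodes:
+
+```command
+kubectl get nodes
+```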
+
+Additionally, you need to have [Helm](https://helm.sh/) installed to follow along with this tutorial. Helm is used here to deploy TiKV to the cluster. To install Helm, follow the relevant section of our guide [Installing Apps on Kubernetes with Helm 3](/docs/guides/how-to-install-apps-on-kubernetes-with-helm-3/#install-helm).
+
+Also, you must install SurrealDB on your local machine to run `surreal` commands on the Kubernetes cluster via port forwarding. Follow the steps in the relevant section of our guide [Getting Started with SurrealDB](/docs/guides/getting-started-with-surrealdb/#how-to-install-surrealdb).
+
+### Deploying TiKV for Persistence
+
+Before deploying SurrealDB to the cluster, you should have both TiKV and the TiKV Operator up and running. As described above, TiKV provides coordinated persistent storage across the cluster. SurrealDB in turn leverages that coordinated storage to keep its distributed instances in sync.
+
+1. Create a Kubernetes manifest for deploying the TiKV Operator CRDs:
+
+ ```command
+ nano tikv-operator-crds.yaml
+ ```
+
+1. Give it the contents shown in our [example manifest](example-tikv-operator-crds.yaml) for a basic TiKV configuration that works on the latest versions of Kubernetes.
+
+ {{< note >}}
+ TiKV's official [beta manifest](https://raw.githubusercontent.com/tikv/tikv-operator/master/manifests/crd.v1beta1.yaml) requires older versions of Kubernetes (1.16 and older). Our example updates that for newer versions of Kubernetes based on TiKV [developer commentary](https://github.com/tikv/tikv-operator/issues/15).
+ {{< /note >}}
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Use the manifest above to install the TiKV Operator CRDs to your cluster. The example command here assumes you named the manifest `tikv-operator-crds.yaml`.
+
+ ```command
+ kubectl apply -f tikv-operator-crds.yaml
+ ```
+
+ ```output
+ customresourcedefinition.apiextensions.k8s.io/tikvclusters.tikv.org created
+ ```
+
+1. Install the TiKV Operator itself via Helm. This involves adding the necessary Helm repository and creating the TiKV Operator namespace before finally installing the operator.
+
+ ```command
+ helm repo add pingcap https://charts.pingcap.org/
+ kubectl create ns tikv-operator
+ helm install --namespace tikv-operator tikv-operator pingcap/tikv-operator --version v0.1.0
+ ```
+
+ ```output
+ NAME: tikv-operator
+ LAST DEPLOYED: Wed Jul 5 13:58:31 2023
+ NAMESPACE: tikv-operator
+ STATUS: deployed
+ REVISION: 1
+ TEST SUITE: None
+ ```
+
+1. Confirm deployment of the operator by checking your Kubernetes pods in the operator's namespace:
+
+ ```command
+ kubectl --namespace tikv-operator get pods
+ ```
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ tikv-operator-64d75bc9c8-j74b6 1/1 Running 0 28s
+ ```
+
+1. Create another Kubernetes manifest file for the TiKV cluster itself:
+
+ ```command
+ nano tikv-cluster.yaml
+ ```
+
+1. The contents for this file are based on TiKV's [basic cluster example](https://raw.githubusercontent.com/tikv/tikv-operator/master/examples/basic/tikv-cluster.yaml). This example is intentionally minimal, aside from the three replicas matching the Kubernetes cluster size, so modify the values here as needed:
+
+ ```file {title="tikv-cluster.yaml" lang="yaml"}
+ apiVersion: tikv.org/v1alpha1
+ kind: TikvCluster
+ metadata:
+ name: tikv-cluster
+ spec:
+ version: v4.0.0
+ pd:
+ baseImage: pingcap/pd
+ replicas: 3
+ requests:
+ storage: "1Gi"
+ config: {}
+ tikv:
+ baseImage: pingcap/tikv
+ replicas: 3
+ requests:
+ storage: "1Gi"
+ config: {}
+ ```
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Use the manifest to install TiKV to your Kubernetes cluster:
+
+ ```command
+ kubectl apply -f tikv-cluster.yaml
+ ```
+
+ ```output
+ tikvcluster.tikv.org/tikv-cluster created
+ ```
+
+1. Verify the deployment of the TiKV cluster:
+
+ ```command
+ kubectl get pods
+ ```
+
+ It may take a while before all of the pods are running:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ tikv-cluster-discovery-6955b6d594-5xmwr 1/1 Running 0 96s
+ tikv-cluster-pd-0 1/1 Running 0 95s
+ tikv-cluster-pd-1 1/1 Running 0 95s
+ tikv-cluster-pd-2 1/1 Running 0 95s
+ tikv-cluster-tikv-0 1/1 Running 0 43s
+ tikv-cluster-tikv-1 1/1 Running 0 43s
+ tikv-cluster-tikv-2 1/1 Running 0 43s
+ ```
+
+### Deploying SurrealDB to the Cluster
+
+With a Kubernetes cluster running TiKV, you are now ready to deploy distributed SurrealDB instances. The process is relatively straightforward. It simply requires a Kubernetes manifest that pulls the SurrealDB image, then starts up SurrealDB with TiKV as the storage option.
+
+1. Create another Kubernetes manifest for deploying the SurrealDB instances, along with a service for routing connections to them:
+
+ ```command
+ nano surreal-manifest.yaml
+ ```
+
+1. The example here deploys three replicas of a basic SurrealDB server:
+
+ ```file {title="surreal-manifest.yaml" lang="yaml"}
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ labels:
+ app: surreal
+ name: surreal-service
+ spec:
+ ports:
+ - name: "surreal-port"
+ port: 8000
+ targetPort: 8000
+ selector:
+ app: surreal
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ labels:
+ app: surreal
+ name: surreal-deployment
+ spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: surreal
+ template:
+ metadata:
+ labels:
+ app: surreal
+ spec:
+ containers:
+ - args:
+ - start
+ - --user=root
+ - --pass=exampleRootPass
+ - tikv://tikv-cluster-pd:2379
+ image: surrealdb/surrealdb:latest
+ name: surrealdb
+ ports:
+ - containerPort: 8000
+ ```
+
+ Notice also that the command used to start up each SurrealDB server uses the `tikv://` protocol. This directs the servers to use TiKV for database persistence. The TiKV URL here, `tikv-cluster-pd`, points to the TiKV service exposed within the Kubernetes cluster.
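+
+    If you want to double-check that service name in your own cluster, listing it with `kubectl` should show a `tikv-cluster-pd` entry, assuming you kept the `tikv-cluster` name from the earlier manifest:
+
+    ```command
+    kubectl get svc tikv-cluster-pd
+    ```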
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Deploy the SurrealDB manifest to the Kubernetes cluster:
+
+ ```command
+ kubectl apply -f surreal-manifest.yaml
+ ```
+
+ ```output
+ service/surreal-service created
+ deployment.apps/surreal-deployment created
+ ```
+
+1. Verify that the SurrealDB instances are running by checking the list of pods:
+
+ ```command
+ kubectl get pods
+ ```
+
+    Again, it may take a while for all pods to report as `Running`:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ surreal-deployment-59fcf5cd9b-hqz5g 1/1 Running 0 7s
+ surreal-deployment-59fcf5cd9b-mlfbl 1/1 Running 0 7s
+ surreal-deployment-59fcf5cd9b-rmb8j 1/1 Running 0 7s
+ tikv-cluster-discovery-6955b6d594-5xmwr 1/1 Running 0 50m
+ tikv-cluster-pd-0 1/1 Running 1 (49m ago) 50m
+ tikv-cluster-pd-1 1/1 Running 0 50m
+ tikv-cluster-pd-2 1/1 Running 0 50m
+ tikv-cluster-tikv-0 1/1 Running 0 49m
+ tikv-cluster-tikv-1 1/1 Running 0 49m
+ tikv-cluster-tikv-2 1/1 Running 0 49m
+ ```
+
+1. As a test, pick one of these pods to verify that SurrealDB has started up and successfully connected to the TiKV service:
+
+ ```command
+ kubectl logs surreal-deployment-59fcf5cd9b-hqz5g
+ ```
+
+ ```output
+ .d8888b. 888 8888888b. 888888b.
+ d88P Y88b 888 888 'Y88b 888 '88b
+ Y88b. 888 888 888 888 .88P
+ 'Y888b. 888 888 888d888 888d888 .d88b. 8888b. 888 888 888 8888888K.
+ 'Y88b. 888 888 888P' 888P' d8P Y8b '88b 888 888 888 888 'Y88b
+ '888 888 888 888 888 88888888 .d888888 888 888 888 888 888
+ Y88b d88P Y88b 888 888 888 Y8b. 888 888 888 888 .d88P 888 d88P
+ 'Y8888P' 'Y88888 888 888 'Y8888 'Y888888 888 8888888P' 8888888P'
+
+
+ 2023-05-30T18:43:00.869343Z INFO surrealdb::env: Running 1.0.0-beta.9+20230402.5eafebd for linux on x86_64
+ 2023-05-30T18:43:00.869368Z INFO surrealdb::iam: Root authentication is enabled
+ 2023-05-30T18:43:00.869371Z INFO surrealdb::iam: Root username is 'root'
+ 2023-05-30T18:43:00.869371Z INFO surrealdb::dbs: Database strict mode is disabled
+ 2023-05-30T18:43:00.869377Z INFO surrealdb::kvs: Connecting to kvs store at tikv://tikv-cluster-pd:2379
+ 2023-05-30T18:43:00.881033Z INFO surrealdb::kvs: Connected to kvs store at tikv://tikv-cluster-pd:2379
+ 2023-05-30T18:43:00.881319Z INFO surrealdb::net: Starting web server on 0.0.0.0:8000
+ 2023-05-30T18:43:00.881444Z INFO surrealdb::net: Started web server on 0.0.0.0:8000
+ ```
+
+## How to Access the SurrealDB Cluster
+
+Your Kubernetes cluster now has an operational and distributed set of SurrealDB instances running. From here you are ready to start working with the SurrealDB cluster.
+
+First, review the steps for accessing your SurrealDB cluster. These are a little different from those for a single SurrealDB server.
+
+Additionally, there is another set of steps for securing your SurrealDB cluster. These are detailed in another guide linked further below, but excerpts are provided in this tutorial to get you started.
+
+### Accessing SurrealDB
+
+The SurrealDB cluster is now running on the Kubernetes network. You can use port forwarding through `kubectl` to access it.
+
+Using the steps above, your SurrealDB cluster deployment includes a Kubernetes service. That service provides a practical way to access the cluster without having to specify a particular instance.
+
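+For example, you can inspect the service and the cluster-internal port it exposes before setting up port forwarding:
+
+```command
+kubectl get svc surreal-service
+```
+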
+Follow along to access the SurrealDB Kubernetes service using port forwarding, then try out your first query on the SurrealDB cluster.
+
+1. Use the `kubectl` `port-forward` command to forward the SurrealDB port (`8000`) from the SurrealDB Kubernetes service to your local machine:
+
+ ```command
+ kubectl port-forward svc/surreal-service 8000:8000
+ ```
+
+ ```output
+ Forwarding from 127.0.0.1:8000 -> 8000
+ Forwarding from [::1]:8000 -> 8000
+ ```
+
+1. This makes the SurrealDB cluster accessible. Test it out by sending a query to the cluster using an HTTP request in a second terminal.
+
+ The example here uses cURL to send the request for information on the `exampleDb` database in the `exampleNs` SurrealDB namespace. This example assumes you used the example root user credentials given in the SurrealDB example manifest further above.
+
+ ```command {title="Terminal #2"}
+ curl -X POST -H "Accept: application/json" -H "NS: exampleNs" -H "DB: exampleDb" --user "root:exampleRootPass" --data "INFO FOR DB;" http://localhost:8000/sql | jq
+ ```
+
+ ```output
+ [
+ {
+ "time": "213.507212ms",
+ "status": "OK",
+ "result": {
+ "dl": {},
+ "dt": {},
+ "fc": {},
+ "pa": {},
+ "sc": {},
+ "tb": {}
+ }
+ }
+ ]
+ ```
+
+ The output is largely empty because the database has not yet been populated. Nevertheless, the structure of the response indicates successful connection to the SurrealDB server.
+
+### Completing the SurrealDB Setup
+
+The SurrealDB cluster is now operable. You can keep using port forwarding or configure routing into the cluster to best fit your needs. However, you likely want to secure your SurrealDB cluster before taking it to production.
+
+Find thorough coverage of how to secure SurrealDB and manage user access in our guide [Managing Security and Access Control for SurrealDB](/docs/guides/managing-security-and-access-for-surrealdb/).
+
+Below are some steps to get you started and demonstrate how these configurations might be applied to your SurrealDB cluster. These steps set up a limited SurrealDB user and disable root access on your SurrealDB servers.
+
+1. Start up a SurrealDB CLI session and connect to your cluster using the root user's credentials. This assumes that port forwarding is still operating as in the previous section. Additionally, this example command uses the example root user credentials included in the manifest earlier in the tutorial.
+
+ ```command {title="Terminal #2"}
+ surreal sql --conn http://localhost:8000 --user root --pass exampleRootPass --ns exampleNs --db exampleDb --pretty
+ ```
+
+1. Execute a `DEFINE LOGIN` SurrealQL command to create a new SurrealDB user. This example creates an `exampleUser` at the database level, meaning that the user's access is limited to the current database (`exampleDb`).
+
+ ```command {title="Terminal #2"}
+ DEFINE LOGIN exampleUser ON DATABASE PASSWORD 'examplePass';
+ ```
+
+1. When done creating users, exit the SurrealDB CLI using the Ctrl + C keyboard combination. Use the same keyboard combination to also stop the `kubectl` port forwarding.
+
+1. In the original terminal, open the SurrealDB Kubernetes manifest created earlier:
+
+ ```command {title="Terminal #1"}
+ nano surreal-manifest.yaml
+ ```
+
+1. Remove the `--user` and `--pass` lines from the `containers` block as shown here:
+
+ ```file
+ # [...]
+ containers:
+ - args:
+ - start
+ - tikv://tikv-cluster-pd:2379
+ image: surrealdb/surrealdb:latest
+ name: surrealdb
+ ports:
+ - containerPort: 8000
+ ```
+
+    Removing the root credentials from the SurrealDB startup command disables root access on the instances.
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Reapply the SurrealDB manifest:
+
+ ```command {title="Terminal #1"}
+ kubectl apply -f surreal-manifest.yaml
+ ```
+
+ ```output
+ service/surreal-service unchanged
+ deployment.apps/surreal-deployment configured
+ ```
+
+1. Verify success by using `kubectl` to display the IDs of the updated pods:
+
+ ```command {title="Terminal #1"}
+ kubectl get pods
+ ```
+
+ ```output
+ surreal-deployment-59fcf5cd9b-cs8sj 1/1 Running 0 118s
+ surreal-deployment-59fcf5cd9b-g569w 1/1 Running 0 2m
+ surreal-deployment-59fcf5cd9b-xwkzf 1/1 Running 0 2m1s
+ tikv-cluster-discovery-6955b6d594-txlft 1/1 Running 0 104m
+ tikv-cluster-pd-0 1/1 Running 0 104m
+ tikv-cluster-pd-1 1/1 Running 0 104m
+ tikv-cluster-pd-2 1/1 Running 0 104m
+ tikv-cluster-tikv-0 1/1 Running 0 103m
+ tikv-cluster-tikv-1 1/1 Running 0 103m
+ tikv-cluster-tikv-2 1/1 Running 0 103m
+ ```
+
+1. Use one of these IDs to see the SurrealDB logs, for example:
+
+ ```command {title="Terminal #1"}
+ kubectl logs surreal-deployment-59fcf5cd9b-cs8sj
+ ```
+
+ Notice that the output shows that `Root authentication is disabled`:
+
+ ```output
+ .d8888b. 888 8888888b. 888888b.
+ d88P Y88b 888 888 'Y88b 888 '88b
+ Y88b. 888 888 888 888 .88P
+ 'Y888b. 888 888 888d888 888d888 .d88b. 8888b. 888 888 888 8888888K.
+ 'Y88b. 888 888 888P' 888P' d8P Y8b '88b 888 888 888 888 'Y88b
+ '888 888 888 888 888 88888888 .d888888 888 888 888 888 888
+ Y88b d88P Y88b 888 888 888 Y8b. 888 888 888 888 .d88P 888 d88P
+ 'Y8888P' 'Y88888 888 888 'Y8888 'Y888888 888 8888888P' 8888888P'
+
+
+ 2023-05-30T22:33:15.571269Z INFO surrealdb::env: Running 1.0.0-beta.9+20230402.5eafebd for linux on x86_64
+ 2023-05-30T22:33:15.571291Z INFO surrealdb::iam: Root authentication is disabled
+ 2023-05-30T22:33:15.571294Z INFO surrealdb::dbs: Database strict mode is disabled
+ 2023-05-30T22:33:15.571301Z INFO surrealdb::kvs: Connecting to kvs store at tikv://tikv-cluster-pd:2379
+ 2023-05-30T22:33:15.698382Z INFO surrealdb::kvs: Connected to kvs store at tikv://tikv-cluster-pd:2379
+ 2023-05-30T22:33:15.698474Z INFO surrealdb::net: Starting web server on 0.0.0.0:8000
+ 2023-05-30T22:33:15.698525Z INFO surrealdb::net: Started web server on 0.0.0.0:8000
+ ```
+
+1. Now verify the limited access. First, start up port forwarding as shown earlier:
+
+ ```command {title="Terminal #1"}
+ kubectl port-forward svc/surreal-service 8000:8000
+ ```
+
+1. Move back to the second terminal and try to access database information using the previous root user credentials as shown further above in this tutorial:
+
+ ```command {title="Terminal #2"}
+ curl -X POST -H "Accept: application/json" -H "NS: exampleNs" -H "DB: exampleDb" --user "root:exampleRootPass" --data "INFO FOR DB;" http://localhost:8000/sql | jq
+ ```
+
+ The authentication fails, verifying that the root user is no longer operative:
+
+ ```output
+ {
+ "code": 403,
+ "details": "Authentication failed",
+ "description": "Your authentication details are invalid. Reauthenticate using valid authentication parameters.",
+ "information": "There was a problem with authentication"
+ }
+ ```
+
+1. Now attempt the same query but using the credentials for the limited user created above:
+
+ ```command {title="Terminal #2"}
+ curl -X POST -H "Accept: application/json" -H "NS: exampleNs" -H "DB: exampleDb" --user "exampleUser:examplePass" --data "INFO FOR DB;" http://localhost:8000/sql | jq
+ ```
+
+ ```output
+ [
+ {
+ "time": "7.02541ms",
+ "status": "OK",
+ "result": {
+ "dl": {
+ "exampleUser": "DEFINE LOGIN exampleUser ON DATABASE PASSHASH '$argon2id$v=19$m=19456,t=2,p=1$JijmKQBeUrhqar0iPHIFiA$eI13ZZh1Gdv0DsetObxrxOeWMFQq34T5/mz3enFSu4M'"
+ },
+ "dt": {},
+ "fc": {},
+ "pa": {},
+ "sc": {},
+ "tb": {}
+ }
+ }
+ ]
+ ```
+
+1. Finally, check to see that the user is limited to the database by trying to get information about the namespace:
+
+ ```command {title="Terminal #2"}
+ curl -X POST -H "Accept: application/json" -H "NS: exampleNs" -H "DB: exampleDb" --user "exampleUser:examplePass" --data "INFO FOR NS;" http://localhost:8000/sql | jq
+ ```
+
+ Recall that the `DEFINE LOGIN` statement above gave the user a database role, not a namespace role:
+
+ ```output
+ [
+ {
+ "time": "345.696µs",
+ "status": "ERR",
+ "detail": "You don't have permission to perform this query type"
+ }
+ ]
+ ```
+
+## Conclusion
+
+You are now ready to operate a distributed SurrealDB cluster. With the powerful possibilities of SurrealDB and the scalability of its distributed architecture, you can adapt to your applications' needs as they grow.
+
+As the beginning of this tutorial indicated, SurrealDB can fit a range of use cases. You can use it like a traditional database, taking advantage of its distributed architecture and inter-document relations. Alternatively, you can use it as a full backend for serverless web applications.
+
+To keep learning about SurrealDB, and to get more ideas for using it with your applications, take a look at our other SurrealDB guides:
+
+- [Building a Web Application on Top of SurrealDB](/docs/guides/surrealdb-for-web-applications)
+
+- [Modeling Data with SurrealDB’s Inter-document Relations](/docs/guides/surrealdb-interdocument-modeling)
\ No newline at end of file
diff --git a/docs/guides/databases/surrealdb/getting-started-with-surrealdb/index.md b/docs/guides/databases/surrealdb/getting-started-with-surrealdb/index.md
new file mode 100644
index 00000000000..fedae84bd97
--- /dev/null
+++ b/docs/guides/databases/surrealdb/getting-started-with-surrealdb/index.md
@@ -0,0 +1,436 @@
+---
+slug: getting-started-with-surrealdb
+title: "Getting Started with SurrealDB"
+description: "SurrealDB brings new features to the relational database model, with an emphasis on supporting serverless applications and distributed infrastructures. Learn about SurrealDB and how to get started using it in this tutorial."
+authors: ['Nathaniel Stickman']
+contributors: ['Nathaniel Stickman']
+published: 2024-05-01
+keywords: ['surrealdb examples','surrealdb performance','surrealdb authentication']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[SurrealDB Documentation](https://surrealdb.com/docs)'
+---
+
+[SurrealDB](https://surrealdb.com/) offers a new approach to relational databases. It brings features like all-in-one handling of database, API, and security layers, real-time queries, and multi-model data storage --- while still retaining a familiar SQL-like language. In this tutorial, learn more about SurrealDB's offerings and how you can get started with this new database solution.
+
+## Why SurrealDB?
+
+SurrealDB serves as a complete database solution for serverless applications and for use cases that require a high degree of scalability. For serverless applications, SurrealDB's all-in-one schema handling lets you design APIs right alongside your databases. Web clients can then readily access those APIs, letting SurrealDB support Jamstack and other serverless web applications. SurrealDB has also been designed with scalability in mind. It is written in Rust, giving it high performance, and its architecture and database handling place distributed systems at the forefront, making it ready for horizontal scaling.
+
+Though not exhaustive, here is a list of some of the features that SurrealDB offers:
+
+- **Handling of database, API, and security layers all in one place.** SurrealDB does not require separate server-side applications to schematize and expose a client-facing API. You can do that from right within SurrealDB, providing support to serverless applications like those using the Jamstack architecture. Moreover, SurrealDB includes a robust access-control system. This further reduces the need to implement separate server-side tools and development.
+
+- **Implements a multi-model approach.** SurrealDB retains the familiarity of SQL queries. At the same time, SurrealDB's queries can leverage inter-document relations and support multiple models. You can store data however you like, and retrieve it however you need. SurrealDB does not limit your models on either end, and does not require you to finalize your models in advance.
+
+- **Supports real-time queries.** SurrealDB can keep clients up-to-date with live push updates for changes in data. Clients can subscribe to queries, and SurrealDB leverages advanced filtering options to fine-tune what kinds of changes clients get live push updates for.
+
+- **Uses a highly-scalable architecture.** SurrealDB is designed to support running databases on distributed clusters, making it easily scalable. SurrealDB's database operations are specially built to handle distributed operations without table or row locks.
+
+## Before You Begin
+
+1. If you have not already done so, create a Linode account and Compute Instance. See our [Getting Started with Linode](/docs/guides/getting-started/) and [Creating a Compute Instance](/docs/guides/creating-a-compute-instance/) guides.
+
+1. Follow our [Setting Up and Securing a Compute Instance](/docs/guides/set-up-and-secure/) guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.
+
+{{< note >}}
+This guide is written for a non-root user. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see the [Users and Groups](/docs/guides/linux-users-and-groups/) guide.
+{{< /note >}}
+
+## How to Install SurrealDB
+
+To start with SurrealDB, install a standalone instance on your system. SurrealDB provides an installation script that makes the process straightforward. Follow along with the steps below to install the SurrealDB server, then keep reading to learn how to use it.
+
+{{< note >}}
+SurrealDB can also be [run as a Docker container](https://surrealdb.com/docs/installation/running/docker). This tutorial, however, focuses on a full installation of SurrealDB to provide a more versatile installation to work with.
+{{< /note >}}
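+
+For reference, a minimal sketch of running SurrealDB under Docker instead might look like the following, assuming Docker is already installed. The credentials and in-memory storage mirror the examples later in this guide.
+
+```command
+# Sketch only: run an in-memory SurrealDB server in a container, exposing port 8000
+docker run --rm -p 8000:8000 surrealdb/surrealdb:latest start --user root --pass exampleRootPass memory
+```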
+
+1. Install `tar`, which the installation script needs to extract the downloaded package:
+
+ {{< tabs >}}
+ {{< tab "Debian-based" >}}
+ ```command
+ sudo apt install tar
+ ```
+ {{< /tab >}}
+ {{< tab "RHEL-based" >}}
+ ```command
+ sudo dnf install tar
+    ```
+    {{< /tab >}}
+ {{< /tabs >}}
+
+1. Optionally, also install the `jq` tool to print cURL output more legibly later in the tutorial:
+
+ {{< tabs >}}
+ {{< tab "Debian-based" >}}
+ ```command
+ sudo apt install jq
+ ```
+ {{< /tab >}}
+ {{< tab "RHEL-based" >}}
+ ```command
+ sudo dnf install jq
+    ```
+    {{< /tab >}}
+ {{< /tabs >}}
+
+1. Run the installation script. cURL fetches the script from its URL, and the script's contents are then piped to your shell for execution.
+
+ ```command
+ curl -sSf https://install.surrealdb.com | sh
+ ```
+
+ ```output
+ .d8888b. 888 8888888b. 888888b.
+ d88P Y88b 888 888 'Y88b 888 '88b
+ Y88b. 888 888 888 888 .88P
+ 'Y888b. 888 888 888d888 888d888 .d88b. 8888b. 888 888 888 8888888K.
+ 'Y88b. 888 888 888P' 888P' d8P Y8b '88b 888 888 888 888 'Y88b
+ '888 888 888 888 888 88888888 .d888888 888 888 888 888 888
+ Y88b d88P Y88b 888 888 888 Y8b. 888 888 888 888 .d88P 888 d88P
+ 'Y8888P' 'Y88888 888 888 'Y8888 'Y888888 888 8888888P' 8888888P'
+
+ Fetching the latest database version...
+ Fetching the host system architecture...
+
+ [...]
+
+ SurrealDB successfully installed in:
+ /home/example-user/.surrealdb/surreal
+
+ [...]
+ ```
+
+1. By default, the SurrealDB binary file is stored at `~/.surrealdb/surreal`. For easier access to the `surreal` command, move the binary to a directory on your shell path. Make sure to change `example-user` to your actual username.
+
+ ```command
+ sudo mv /home/example-user/.surrealdb/surreal /usr/local/bin
+ ```
+
+ Leave the `~/.surrealdb/` directory in place. The next section showcases SurrealDB's option to persist a database to a file, and the directory provides a convenient location.
+
+## SurrealDB: The Basics
+
+### Running the SurrealDB Server
+
+To begin using SurrealDB, you must first start the database server. You can do this from the `surreal` binary's `start` command. However, before starting the server, you need to decide how to store data: in memory or in a file.
+
+Below are two versions of a basic command for starting the SurrealDB server, one for each storage option:
+
+- **Memory**: To run SurrealDB using in-memory database storage, end the `start` command with `memory`. This option is well suited to testing out SurrealDB, letting you get a feel for queries without committing to persistent data.
+
+ ```command
+ surreal start --user root --pass exampleRootPass memory
+ ```
+
+- **File**: To run SurrealDB using a database file for storage, end the `start` command with `file://` followed by the path to a `.db` file. The example here points to an `exampleDatabase.db` file (which does not yet exist) in the `.surrealdb` directory under the current user's (`example-user`) home directory. This uses the absolute path, so it begins with a `/`. Make sure to change `example-user` to your actual username.
+
+ ```command
+ surreal start --user root --pass exampleRootPass file:///home/example-user/.surrealdb/exampleDatabase.db
+ ```
+
+```output
+[...]
+2022-12-31T22:23:24.252627Z INFO surrealdb::net: Starting web server on 0.0.0.0:8000
+2022-12-31T22:23:25.262728Z INFO surrealdb::net: Started web server on 0.0.0.0:8000
+[...]
+```
+
+Notice that both of the commands have `--user` and `--pass` options. These define the root user credentials for your server instance, which you can use for queries in later examples.
+
+Before moving ahead with SurrealDB, you likely want to implement stricter security around this user, and to create other users with managed access. If so, check out our tutorial [Managing Security and Access Control for SurrealDB](/docs/guides/managing-security-and-access-for-surrealdb/).
+
+#### Running on a Different Port
+
+By default, SurrealDB runs on port `8000`. You can alter the port with the `--bind` option. This example runs the SurrealDB server on port `8080`:
+
+```command
+surreal start --bind 0.0.0.0:8080 --user root --pass exampleRootPass memory
+```
+
+The `--bind` option also lets you alter the address at which the SurrealDB server can be accessed. By default, the address is `0.0.0.0` as above, meaning the server accepts connections on any of the machine's network interfaces.
+
+The examples in this tutorial only need to access the SurrealDB server over `localhost` (`127.0.0.1`), so for testing purposes it's good practice to bind the server to that address only:
+
+```command
+surreal start --bind 127.0.0.1:8000 --user root --pass exampleRootPass memory
+```
+
+### Querying SurrealDB from the CLI
+
+In addition to the server, the `surreal` binary includes an `sql` command to run SurrealDB's CLI tool. This provides easy access to the SurrealDB server and is probably the best way to learn SurrealDB queries.
+
+1. First, open a second terminal for the SurrealDB CLI, as the original is still running the SurrealDB server. The rest of the commands in this tutorial are run from this second terminal.
+
+1. To start a SurrealDB CLI session, use a command like the one below. This connects to a SurrealDB server started with one of the example commands above (except the one that changes the default port).
+
+ ```command
+ surreal sql --conn http://localhost:8000 --user root --pass exampleRootPass --ns exampleNs --db exampleDb --pretty
+ ```
+
+    The command specifies a connection to a SurrealDB server running on `localhost`, and connects using the `root` user and the example password from above. Additionally, for this tutorial's purposes, the connection immediately selects an `exampleNs` namespace and an `exampleDb` database.
+
+ The example CLI startup here also includes the `--pretty` option. This "pretty prints" responses from the server, making them easier to read and navigate.
+
+    From there, you can start executing queries on your SurrealDB database. The following steps provide examples that set up a set of tables and records, which help demonstrate some of SurrealDB's unique features.
+
+1. Create a `tags` table to store tags for each blog post and provide a few starting values:
+
+ ```command
+ INSERT INTO tags [
+ { id: "first", value: "first" },
+ { id: "last", value: "last" },
+ { id: "post", value: "post" },
+ { id: "test", value: "test" }
+ ];
+ ```
+
+1. SurrealDB automatically generates unique IDs, but entering these manually makes the records more intuitive to fetch. For instance, you can fetch the `last` tag above with the following command:
+
+ ```command
+ SELECT value FROM tags:last;
+ ```
+
+ ```output
+ {
+ value: 'last'
+ }
+ ```
+
+    SurrealDB record IDs consist of the table name and the record's ID (`last` in this case), hence `tags:last`.
+
+1. Create a set of blog posts in an `article` table. These consist of defined IDs, titles, and body content. Date values are included as well for easily sorting the records later.
+
+ ```command
+ INSERT INTO article [
+ { id: "first", date: "2023-01-01T12:01:01Z", title: "First Post", body: "This is the first post." },
+ { id: "second", date: "2023-02-01T13:02:02Z", title: "Second Post", body: "You are reading the second post." },
+ { id: "third", date: "2023-03-01T14:03:03Z", title: "Third Post", body: "Here, the contents for the third post." }
+ ];
+ ```
+
+    For those familiar with traditional SQL, the two `INSERT` statements above may seem unusual. SurrealDB supports the traditional `INSERT` syntax as well, but the form used here condenses the command and aligns more closely with document-database work.
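+
+    For comparison, a sketch of the traditional form is shown below. The `example` tag is hypothetical and only illustrates the equivalent syntax; you do not need to run it for this tutorial.
+
+    ```command
+    -- Sketch only: the traditional INSERT form with a hypothetical tag
+    INSERT INTO tags (id, value) VALUES ('example', 'example');
+    ```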
+
+1. Create some relations between the `article` records and `tags` records. This provides a nice way to "tag" the posts while also demonstrating how SurrealDB's `RELATE` statement can be used for managing inter-document relations.
+
+ ```command
+ RELATE article:first->tagged->tags:post;
+ RELATE article:first->tagged->tags:first;
+ RELATE article:first->tagged->tags:test;
+
+ RELATE article:second->tagged->tags:post;
+ RELATE article:second->tagged->tags:test;
+
+ RELATE article:third->tagged->tags:post;
+ RELATE article:third->tagged->tags:last;
+ RELATE article:third->tagged->tags:test;
+ ```
+
+ To elaborate, the commands above result in the `first` article being associated with the `post`, `first`, and `test` tags. See how to create fresh and useful models from these relations later on.
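+
+    If you want to inspect the edge records that `RELATE` creates, you can query the `tagged` table directly; each edge stores the related article and tag in its `in` and `out` fields:
+
+    ```command
+    SELECT in, out FROM tagged;
+    ```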
+
+1. Now take a look at what kinds of records your database has. To start, get all the records from the `article` base table:
+
+ ```command
+ SELECT id, title, body FROM article;
+ ```
+
+ ```output
+ [
+ {
+ body: 'This is the first post.',
+ id: article:first,
+ title: 'First Post'
+ },
+ {
+ body: 'You are reading the second post.',
+ id: article:second,
+ title: 'Second Post'
+ },
+ {
+ body: 'Here, the contents for the third post.',
+ id: article:third,
+ title: 'Third Post'
+ }
+ ]
+ ```
+
+1. Now do the same for `tags`:
+
+ ```command
+ SELECT id, value FROM tags;
+ ```
+
+ ```output
+ [
+ {
+ id: tags:first,
+ value: 'first'
+ },
+ {
+ id: tags:last,
+ value: 'last'
+ },
+ {
+ id: tags:post,
+ value: 'post'
+ },
+ {
+ id: tags:test,
+ value: 'test'
+ }
+ ]
+ ```
+
+1. One of the key features of this setup is being able to relate article records to tags. This next query essentially uses the `tagged` relations to create a new model, one that should prove much more useful for rendering the blog:
+
+ ```command
+ SELECT id, title, body, ->tagged->tags.value AS tags, date FROM article ORDER BY date;
+ ```
+
+    ```output
+ [
+ {
+ body: 'This is the first post.',
+ date: '2023-01-01T12:01:01Z',
+ id: article:first,
+ tags: [
+ 'post',
+ 'test',
+ 'first'
+ ],
+ title: 'First Post'
+ },
+ {
+ body: 'You are reading the second post.',
+ date: '2023-02-01T13:02:02Z',
+ id: article:second,
+ tags: [
+ 'post',
+ 'test'
+ ],
+ title: 'Second Post'
+ },
+ {
+ body: 'Here, the contents for the third post.',
+ date: '2023-03-01T14:03:03Z',
+ id: article:third,
+ tags: [
+ 'test',
+ 'last',
+ 'post'
+ ],
+ title: 'Third Post'
+ }
+ ]
+ ```
+
+### Querying SurrealDB Using HTTP
+
+A SurrealDB server instance also maintains a set of HTTP endpoints. With these, a wide range of applications can query the database without needing a separate server-side application.
+
+This section provides some simple demonstrations of SurrealDB's HTTP endpoints using cURL from the command line. For legibility, the examples below pipe the cURL output through the `jq` tool to pretty-print the JSON.
+
+1. Just like starting up the SurrealDB CLI, it's best to indicate the namespace and database upfront with HTTP requests. For this, create a file with the header contents for your requests, which makes these easier to input:
+
+ ```command
+    cat > surreal_header_file <<EOF
+    Accept: application/json
+    NS: exampleNs
+    DB: exampleDb
+    EOF
+    ```
+
+1. Similarly, create a file containing the query to send. This reuses the blog post query from the CLI section above:
+
+    ```command
+    cat > surreal_query_file <<EOF
+    SELECT id, title, body, ->tagged->tags.value AS tags, date FROM article ORDER BY date;
+ EOF
+ ```
+
+1. Now run the cURL request to fetch the modeled blog post data:
+
+ ```command
+ curl -X POST -H "@surreal_header_file" --user "root:exampleRootPass" --data-binary "@surreal_query_file" http://localhost:8000/sql | jq
+ ```
+
+ ```output
+ [
+ {
+ "time": "828.056µs",
+ "status": "OK",
+ "result": [
+ {
+ "body": "This is the first post.",
+ "date": "2023-01-01T12:01:01Z",
+ "id": "article:first",
+ "tags": [
+ "post",
+ "test",
+ "first"
+ ],
+ "title": "First Post"
+ },
+ {
+ "body": "You are reading the second post.",
+ "date": "2023-02-01T13:02:02Z",
+ "id": "article:second",
+ "tags": [
+ "post",
+ "test"
+ ],
+ "title": "Second Post"
+ },
+ {
+ "body": "Here, the contents for the third post.",
+ "date": "2023-03-01T14:03:03Z",
+ "id": "article:third",
+ "tags": [
+ "test",
+ "last",
+ "post"
+ ],
+ "title": "Third Post"
+ }
+ ]
+ }
+ ]
+ ```
+
+## Conclusion
+
+You now have a foundation in SurrealDB and are ready to start making use of its powerful features as a database. To build on these foundations, you may want to start with the official SurrealDB documentation linked below.
+
+Afterwards, continue reading our other tutorials on SurrealDB. These tackle more advanced and focused use cases. In particular, everyone should follow our [Managing Security and Access Control for SurrealDB](/docs/guides/managing-security-and-access-for-surrealdb/) tutorial next, ensuring a secure and controlled database server.
+
+From there, take your pick based on your interests and needs:
+
+- [Deploying a SurrealDB Cluster](/docs/guides/deploy-surrealdb-cluster/)
+- [Building a Web Application on Top of SurrealDB](/docs/guides/surrealdb-for-web-applications)
+- [Modeling Data with SurrealDB’s Inter-document Relations](/docs/guides/surrealdb-interdocument-modeling)
\ No newline at end of file
diff --git a/docs/guides/databases/surrealdb/managing-security-and-access-for-surrealdb/index.md b/docs/guides/databases/surrealdb/managing-security-and-access-for-surrealdb/index.md
new file mode 100644
index 00000000000..2206d7b5697
--- /dev/null
+++ b/docs/guides/databases/surrealdb/managing-security-and-access-for-surrealdb/index.md
@@ -0,0 +1,479 @@
+---
+slug: managing-security-and-access-for-surrealdb
+title: "Managing Security and Access Control for SurrealDB"
+description: "Before moving to production, you need to secure your SurrealDB. Fortunately, SurrealDB features robust access control configurations and API. See how to lock-down your SurrealDB server and set up user access in this tutorial."
+authors: ['Nathaniel Stickman']
+contributors: ['Nathaniel Stickman']
+published: 2024-05-01
+keywords: ['surrealdb tutorial','surrealdb client','surrealdb authentication']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[SurrealDB Documentation](https://surrealdb.com/docs)'
+---
+
+SurrealDB provides a new approach to relational databases, with all-in-one database, API, and access control. Once a SurrealDB instance is up and running, you likely want to secure it. Learn about SurrealDB's security features and best practices in this tutorial. See how to secure your instance with limited user logins and discover how SurrealDB can implement a custom user-management solution for your applications.
+
+## How to Secure SurrealDB
+
+Learning how to run SurrealDB securely should be the first step after provisioning and getting familiar with your instance. The SurrealDB server is typically run with a root user during this initial provisioning and experimenting stage. However, to control access, you should create limited users. Only include a root user in rare circumstances.
+
+Continue reading to find out how to regulate root access and work with limited users on your SurrealDB instance.
+
+### Setting Up SurrealDB
+
+For this tutorial, you need to have installed SurrealDB on your system and placed the SurrealDB binary in your shell path. To do so, follow along with the instructions in our [Getting Started with SurrealDB](/docs/guides/getting-started-with-surrealdb/) guide.
+
+The tutorial assumes you have followed that guide up through the *How to Install SurrealDB* section, with SurrealDB installed and accessible via the `surreal` command.
+
+Additionally, the examples throughout this tutorial assume you have set up the database with the sample records provided in the guide linked above. For convenience, a single set of SurrealQL commands that creates these sample records is provided in the section below.
+
+### Running SurrealDB Server Securely
+
+The guide linked above starts the SurrealDB server with a root username and password. However, root credentials input in this way may be visible in plain text, presenting a potential security risk.
+
+SurrealDB is only meant to be run in this way for initial setup purposes. During that period, limit access to `localhost`, where credentials are more secure.
+
+Once your instance's basic needs are configured, you should run the SurrealDB server without a root user at all. To disable root access, run the `surreal start` command to start up your SurrealDB server, but leave off the `--user` and `--pass` options.
+
+In this case, data is accessed using limited users created within particular namespaces and databases. Such users provide access to the server while keeping each set of credentials limited to one namespace or database.
+
+The example that follows sets up a limited user on your SurrealDB server. This example names the user `exampleUser` and relegates their access to the database level, specifically within `exampleDb`.
+
+1. Start up the server just as shown in the [Getting Started](/docs/guides/getting-started-with-surrealdb/#running-the-surrealdb-server) guide linked above, using a root user and password. This example initially limits the server to `localhost` (`127.0.0.1`), shutting off external access. The example also stores the database as a file rather than in memory.
+
+ ```command {title="Terminal #1"}
+ surreal start --bind 127.0.0.1:8000 --user root --pass exampleRootPass file:///home/example-user/.surrealdb/exampleDb.db
+ ```
+
+1. Open the SurrealDB CLI for entering SurrealQL commands. This example command also starts the CLI session out in a particular namespace and database.
+
+ ```command {title="Terminal #2"}
+ surreal sql --conn http://localhost:8000 --user root --pass exampleRootPass --ns exampleNs --db exampleDb
+ ```
+
+ {{< note >}}
+    Technically, you do not need to use the CLI; you could instead send commands as HTTP requests. However, the CLI provides a convenient way to enter commands during this stage of server provisioning.
+ {{< /note >}}
+
+1. Enter any SurrealQL commands for initially populating the database or setting up schemas. For the examples in this tutorial, this means running the commands below to set up the sample data mentioned in the section above:
+
+ ```command {title="Terminal #2"}
+ INSERT INTO tags [
+ { id: "first", value: "first" },
+ { id: "last", value: "last" },
+ { id: "post", value: "post" },
+ { id: "test", value: "test" }
+ ];
+
+ INSERT INTO article [
+ { id: "first", date: "2023-01-01T12:01:01Z", title: "First Post", body: "This is the first post." },
+ { id: "second", date: "2023-02-01T13:02:02Z", title: "Second Post", body: "You are reading the second post." },
+ { id: "third", date: "2023-03-01T14:03:03Z", title: "Third Post", body: "Here, the contents for the third post." }
+ ];
+
+ RELATE article:first->tagged->tags:post;
+ RELATE article:first->tagged->tags:first;
+ RELATE article:first->tagged->tags:test;
+
+ RELATE article:second->tagged->tags:post;
+ RELATE article:second->tagged->tags:test;
+
+ RELATE article:third->tagged->tags:post;
+ RELATE article:third->tagged->tags:last;
+ RELATE article:third->tagged->tags:test;
+ ```
+
+1. Define a new SurrealDB limited user. The command below provides a basic example for a database-level user. Learn more about this command's usage in the section on [Creating a New SurrealDB User](/docs/guides/managing-security-and-access-for-surrealdb/#creating-a-new-surrealdb-user) further on.
+
+ ```command {title="Terminal #2"}
+ DEFINE LOGIN exampleUser ON DATABASE PASSWORD 'examplePass';
+ ```
+
+1. Use the Ctrl + C key combination to shut down both the SurrealDB CLI and server.
+
+1. Start up a new SurrealDB server using the same database source, but without defining a root user or root user password. This command does not bind the server to `localhost`. Instead, it binds to `0.0.0.0`, opening access on all of the system's network interfaces and readying the server for external access.
+
+ ```command {title="Terminal #1"}
+ surreal start --bind 0.0.0.0:8000 file:///home/example-user/.surrealdb/exampleDb.db
+ ```
+
+ You can now use the limited user to access the SurrealDB server's APIs. Learn more about doing so in the section on [Accessing SurrealDB as a Limited User](/docs/guides/managing-security-and-access-for-surrealdb/#accessing-surrealdb-as-a-limited-user) further on.
+
+1. To get started, here is a simple example that fetches information about the database using the SurrealDB server's `/sql` HTTP API endpoint.
+
+ ```command {title="Terminal #2"}
+ curl -X POST -H "Accept: application/json" -H "NS: exampleNs" -H "DB: exampleDb" --user "exampleUser:examplePass" --data "INFO FOR DB;" http://localhost:8000/sql | jq
+ ```
+
+ ```output
+ [
+ {
+ "time": "239.691µs",
+ "status": "OK",
+ "result": {
+ "dl": {
+ "exampleUser": "DEFINE LOGIN exampleUser ON DATABASE PASSHASH '$argon2ExamplePassHash'"
+ },
+ "dt": {},
+ "fc": {},
+ "pa": {},
+ "sc": {},
+ "tb": {
+ "article": "DEFINE TABLE article SCHEMALESS PERMISSIONS NONE",
+ "tagged": "DEFINE TABLE tagged SCHEMALESS PERMISSIONS NONE",
+ "tags": "DEFINE TABLE tags SCHEMALESS PERMISSIONS NONE"
+ }
+ }
+ }
+ ]
+ ```
+
+1. To verify the limits to the user's access, try a similar command to fetch information about the namespace. Recall that this is a database-level user. As such, its access does not extend to namespace information.
+
+ ```command {title="Terminal #2"}
+ curl -X POST -H "Accept: application/json" -H "NS: exampleNs" -H "DB: exampleDb" --user "exampleUser:examplePass" --data "INFO FOR NS;" http://localhost:8000/sql | jq
+ ```
+
+ ```output
+ [
+ {
+ "time": "43.29µs",
+ "status": "ERR",
+ "detail": "You don't have permission to perform this query type"
+ }
+ ]
+ ```
+
+## How to Securely Access SurrealDB with a Limited User
+
+The above shows how to secure a SurrealDB server by disabling the root user after initial setup. Part of that initial setup includes creating limited users, which are used to work within your SurrealDB databases.
+
+Such limited users are intended to provide access to your database server after initial setup. With access restricted to specific namespaces or databases, these users offer your server a higher level of security.
+
+This section goes more in-depth with limited users in SurrealDB, covering creation and use of both namespace- and database-level users.
+
+### Creating a New SurrealDB User
+
+Each SurrealDB login is created with a SurrealQL `DEFINE LOGIN` statement. Upon creation, each login is associated with either the current namespace or the current database: it has full access within that boundary, but cannot access information outside of it.
+
+For example, here are two ways of defining a new `exampleUser` login on your SurrealDB server:
+
+- The first example uses `ON NAMESPACE` to grant this user namespace-level permissions:
+
+ ```command
+ DEFINE LOGIN exampleUser ON NAMESPACE PASSWORD 'examplePass';
+ ```
+
+- Alternatively, you could use `ON DATABASE` to only grant the user database-level permissions:
+
+ ```command
+ DEFINE LOGIN exampleUser ON DATABASE PASSWORD 'examplePass';
+ ```
+
+### Accessing SurrealDB as a Limited User
+
+Once you have created a limited user login, you can access SurrealDB with that user's credentials. This allows you to run the SurrealDB server without a root user. Since limited users can only access their designated namespaces and databases, the server is more secure.
+
+At this point, access relies on SurrealDB's APIs. The SurrealDB CLI limits its SQL interface to the root user. Thus, limited user logins need to access the SurrealDB server via HTTP requests or through dedicated SurrealDB libraries.
+
+How to do this varies widely depending on the tools and frameworks in use. However, a straightforward way to explore your new limited users' capabilities is with the cURL command-line tool.
+
+The section above on securing the SurrealDB instance had a simple example. The following example uses a more advanced query.
+
+1. Save a file with header information. While not mandatory, this avoids having to specify all of these headers in each cURL request.
+
+    ```command {title="Terminal #2"}
+    cat > surreal_header_file <<EOF
+    Accept: application/json
+    NS: exampleNs
+    DB: exampleDb
+    EOF
+    ```
+
+1. Use the header file in a cURL request that runs a more advanced query. This query fetches each article's title and date, along with its related tags by traversing the `tagged` relation:
+
+    ```command {title="Terminal #2"}
+    curl -X POST -H @surreal_header_file --user "exampleUser:examplePass" --data 'SELECT title, ->tagged->tags.value AS tags, date FROM article ORDER BY date;' http://localhost:8000/sql | jq
+    ```
+
+ ```output
+ [
+ {
+ "time": "1.054678ms",
+ "status": "OK",
+ "result": [
+ {
+ "date": "2023-01-01T12:01:01Z",
+ "tags": [
+ "test",
+ "post",
+ "first"
+ ],
+ "title": "First Post"
+ },
+ {
+ "date": "2023-02-01T13:02:02Z",
+ "tags": [
+ "test",
+ "post"
+ ],
+ "title": "Second Post"
+ },
+ {
+ "date": "2023-03-01T14:03:03Z",
+ "tags": [
+ "test",
+ "last",
+ "post"
+ ],
+ "title": "Third Post"
+ }
+ ]
+ }
+ ]
+ ```
+
+## How to Manage Scoped User Accounts and Access in SurrealDB
+
+The limited users described above provide your SurrealDB instance with logins to manage your database without root access. Each such user login keeps user access restricted, while also giving these users credentials to interact with and manage the database.
+
+Beyond this, SurrealDB also includes a set of features for scoped user accounts. These accounts primarily provide a set of access-control features to facilitate external access to the database. They are ideal for giving access to external applications, or even for full user management of your web application.
+
+The rest of this tutorial is dedicated to covering scopes and scoped users in SurrealDB. Follow along for an overview that you can use to implement more fine-grained access control. You can use the information here to start building a full user management setup in SurrealDB.
+
+### Creating a Scope in SurrealDB
+
+Scoped user accounts center on the use of SurrealDB's scopes. A *scope* is a special construct specifically designed to help your SurrealDB server function as a web database.
+
+You can use a scope to set up authentication endpoints and to build access rules for fine-grained database access control.
+
+To get started with user accounts, you need to define a scope for those accounts. The steps that follow walk through an example of creating a `userAccount` scope. The example also leverages the sample data set up earlier in the tutorial to showcase scope-based access control.
+
+1. If still running, use the Ctrl + C key combination to shut down the SurrealDB server.
+
+1. Restart the server as the root user:
+
+ ```command {title="Terminal #1"}
+ surreal start --bind 127.0.0.1:8000 --user root --pass exampleRootPass file:///home/example-user/.surrealdb/exampleDb.db
+ ```
+
+1. Log in to the SurrealDB CLI as the root user:
+
+ ```command {title="Terminal #2"}
+ surreal sql --conn http://localhost:8000 --user root --pass exampleRootPass --ns exampleNs --db exampleDb
+ ```
+
+1. Define a new `userAccount` scope. Typically, a scope definition includes three parts in addition to the definition itself:
+
+ - `SESSION` defines how long a user authenticated in this scope remains authenticated. This example uses three days (`3d`). Alternatively, you might choose a number of hours, as in `24h`.
+
+    - `SIGNUP` defines a `/signup` endpoint and a query to execute for it. The example here represents a fairly standard approach. Upon a signup attempt, SurrealDB attempts to create the new user account using a `$username` variable and a `$pass` variable.
+
+ - `SIGNIN` defines a `/signin` endpoint and a query to execute for it. Like above, this example shows something rather standard. Taking `$username` and `$pass` variables, SurrealDB attempts to authenticate the user.
+
+    Both the `/signup` and `/signin` endpoints created here respond to successful requests with a JSON Web Token (JWT), which you can use further on.
+
+ ```command {title="Terminal #2"}
+ DEFINE SCOPE userAccount SESSION 3d
+ SIGNUP (CREATE user SET username = $username, pass = crypto::argon2::generate($pass))
+ SIGNIN (SELECT * FROM user WHERE username = $username AND crypto::argon2::compare(pass, $pass));
+ ```
+
+1. Leverage information about the current user's scope to define fine-grained access to tables and records. Below is a simple example that requires external users to be in the `userAccount` scope to `SELECT` records from the `article` table. It also prohibits access to the `CREATE`, `UPDATE`, and `DELETE` type statements.
+
+ ```command {title="Terminal #2"}
+ DEFINE TABLE article SCHEMALESS
+ PERMISSIONS
+ FOR select WHERE $scope = "userAccount"
+ FOR create, update, delete NONE;
+ ```
+
+ {{< note >}}
+ Using the `$scope` variable is a good start, but more advanced scenarios probably want more information about the current authenticated user. For that, take advantage of the `$session` and `$auth` variables, particularly the `$auth.id` variable. Associate new records with the `$auth.id` of the current user, and you can subsequently provide users with matching IDs access to those records.
+
+ See more of what these variables look like by using the `SELECT` statement on them when authenticated as a scoped user, for example:
+
+ ```command
+ SELECT * FROM $auth;
+ ```
+ {{< / note >}}
+
+1. When done, use the Ctrl + C key combination to shut down both the SurrealDB CLI and server.
+
+### Creating a Scoped User Account
+
+With a scope for user accounts set up, you can now start adding users to that scope. This is done via the `/signup` endpoint defined during the scope creation. The endpoint takes a `POST` request, with data indicating:
+
+- `NS` for the namespace
+
+- `DB` for the database
+
+- `SC` for the scope
+
+Additionally, this example also requires the request data to include `username` and `pass` parameters. These were defined in the `SIGNUP` definition, and thus have to be present in the request for a successful `SIGNUP` query.
+
+1. Restart the SurrealDB server, this time without the root user:
+
+ ```command {title="Terminal #1"}
+ surreal start --bind 0.0.0.0:8000 file:///home/example-user/.surrealdb/exampleDb.db
+ ```
+
+1. Here is an example sign up for an `exampleAccount` user. The server URL and other parts of this example assume you are running your SurrealDB server as shown throughout the rest of this tutorial.
+
+ ```command {title="Terminal #2"}
+ curl -X POST -H "Accept: application/json" --data "{ NS: 'exampleNs', DB: 'exampleDb', SC: 'userAccount', username: 'exampleAccount', pass: 'exampleAccountPass' }" http://localhost:8000/signup | jq
+ ```
+
+ ```output
+ {
+ "code": 200,
+ "details": "Authentication succeeded",
+ "token": "SESSION_JWT"
+ }
+ ```
+
+ The response includes a JWT `token` that you can use to authenticate requests as the newly created user.
+
+1. Once a new user account is created, you can use the `/signin` endpoint to authenticate a new session with that user. The example here works almost identically to the one above, except that it uses `/signin` instead of `/signup`.
+
+ ```command {title="Terminal #2"}
+ curl -X POST -H "Accept: application/json" --data "{ NS: 'exampleNs', DB: 'exampleDb', SC: 'userAccount', username: 'exampleAccount', pass: 'exampleAccountPass' }" http://localhost:8000/signin | jq
+ ```
+
+ ```output
+ {
+ "code": 200,
+ "details": "Authentication succeeded",
+ "token": "SESSION_JWT"
+ }
+ ```
+
+### Accessing SurrealDB with a Scoped User
+
+The key to accessing SurrealDB resources with a scoped user is the JWT. Requests using a valid JWT for authentication allow you to query the SurrealDB database as the authenticated scoped user.
+
+To authenticate HTTP requests with the JWT, include an `Authorization: Bearer` header in the request. Using the `SESSION_JWT` example JWT from above, your requests for `exampleAccount` should have a header such as `Authorization: Bearer SESSION_JWT`.
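+
+For instance, a request like the following sketch (assuming the `exampleNs`, `exampleDb`, and `article` setup from earlier in this guide, and substituting the actual token returned by `/signin` for `SESSION_JWT`) queries the `article` table as the scoped user:
+
+```command {title="Terminal #2"}
+curl -X POST -H "Accept: application/json" -H "NS: exampleNs" -H "DB: exampleDb" -H "Authorization: Bearer SESSION_JWT" --data "SELECT * FROM article;" http://localhost:8000/sql | jq
+```
+
+This works because the permissions defined for the `article` table earlier in this guide only allow `SELECT` for users in the `userAccount` scope.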
+
+The steps that follow use a couple of the SurrealDB query endpoints with the authenticated `exampleAccount` user created above. Like other examples in this tutorial, these use cURL for the HTTP requests to make the examples accessible and clear.
+
+1. Save a file with header information. A similar file, using the same name, was created earlier in the tutorial. Again, doing so is optional, but having this file helps to prevent repetitiveness in the cURL commands to follow.
+
+    ```command {title="Terminal #2"}
+    cat > surreal_header_file <<EOF
+    Accept: application/json
+    NS: exampleNs
+    DB: exampleDb
+    Authorization: Bearer SESSION_JWT
+    EOF
+    ```
+
+1. If the SurrealDB server is still running, use the Ctrl + C key combination to shut it down.
+
+1. Open the default port (`8000`) for the SurrealDB server in your system's firewall:
+
+ {{< tabs >}}
+ {{< tab "Debian-based" >}}
+ On a Debian or Ubuntu system, refer to our [How to Configure a Firewall with UFW](/docs/guides/configure-firewall-with-ufw/) guide, and use commands like the following to open the port:
+
+ ```command
+ sudo ufw allow 8000/tcp
+ sudo ufw reload
+ ```
+ {{< /tab >}}
+ {{< tab "RHEL-based" >}}
+ On a CentOS or similar system (e.g. AlmaLinux and Rocky Linux), refer to our [Configure a Firewall with Firewalld](/docs/guides/introduction-to-firewalld-on-centos/) guide, and use commands like the following to open the port:
+
+ ```command
+ sudo firewall-cmd --zone=public --add-port=8000/tcp --permanent
+ sudo firewall-cmd --reload
+    ```
+    {{< /tab >}}
+ {{< /tabs >}}
+
+1. Start the SurrealDB server using the same local file storage as before, but remove the root user and bind the server to any address:
+
+ ```command
+ surreal start --bind 0.0.0.0:8000 file:///home/example-user/.surrealdb/example.db
+ ```
+
+### Configuring the SurrealDB Schemas
+
+To prepare the SurrealDB database for the application, you should define the schemas that the application needs. These schemas vary widely from one application to another, so plan out your application's features to model its databases effectively.
+
+Since this tutorial uses an example to-do list application to demonstrate, the steps here can provide a basic model. Follow along to see how you can use SurrealDB's `DEFINE` commands to craft a database for your own application's needs.
+
+For more on modeling databases in SurrealDB, take a look at the SurrealDB documentation linked at the end of this guide. For more advanced modeling ideas, check out our [Modeling Data with SurrealDB’s Inter-document Relations](/docs/guides/surrealdb-interdocument-modeling/) guide.
+
+1. Create a file to store your SurrealQL queries. This tutorial names the file `surreal.surql`. To execute SurrealDB queries over HTTP using cURL, it is easiest to work with queries stored in a file like this.
+
+ ```command
+ nano surreal.surql
+ ```
+
+ Each set of SurrealQL commands below should be added to this file. The last step in this section then shows how to execute all of the commands together using a single cURL command.
+
+1. Define a user account scope, called `account`, and give the scope `SIGNUP` and `SIGNIN` functions. This single command lays the basis for user access that the example application can use for full login functionality.
+
+ To learn more, take a look at the section on scoped user accounts in our [Managing Security and Access Control for SurrealDB](/docs/guides/managing-security-and-access-for-surrealdb/#how-to-manage-scoped-user-accounts-and-access-in-surrealdb) guide.
+
+ ```file {title="surreal.surql" lang="sql"}
+ DEFINE SCOPE account SESSION 24h
+ SIGNUP (CREATE user SET username = $username, pass = crypto::argon2::generate($pass))
+ SIGNIN (SELECT * FROM user WHERE username = $username AND crypto::argon2::compare(pass, $pass));
+ ```
+
+1. Define the `item` table for storing to-do items. The table is defined as `SCHEMAFULL`, meaning that it has a defined schema (provided in the next step) that data must adhere to.
+
+ The table also has a set of permissions that apply to scoped user accounts. A record can only be viewed (`select`) and modified (`update`) by a user with an ID matching the record's `owner` field. A record can be created by any user account, but no user can delete a record.
+
+ ```file {title="surreal.surql" lang="sql" linenostart="4"}
+ DEFINE TABLE item SCHEMAFULL
+ PERMISSIONS
+ FOR select, update WHERE owner = $token.ID
+ FOR create WHERE $scope = "account"
+ FOR delete NONE;
+ ```
+
+1. Define the fields for the `item` table. Doing so essentially sets up the table's schema.
+
+ Using `ASSERT` allows limits to be placed on a field's possible contents. `VALUE` allows the field to define default content. The `$value` variable corresponds to the input value for that field.
+
+ ```file {title="surreal.surql" lang="sql" linenostart="9"}
+ DEFINE FIELD description ON TABLE item TYPE string
+ ASSERT $value != NONE;
+ DEFINE FIELD completed ON TABLE item TYPE bool
+ VALUE $value OR false;
+ DEFINE FIELD owner ON TABLE item TYPE string
+ VALUE $value OR $token.ID;
+ DEFINE FIELD date ON TABLE item TYPE datetime
+ VALUE time::now();
+ ```
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Execute these SurrealQL commands using cURL and the limited user login created in the previous section. The example command here uses the credentials shown above:
+
+ ```command
+ curl -X POST -H "Accept: application/json" -H "NS: application" -H "DB: todo" --user "exampleUser:examplePass" --data-binary "@surreal.surql" http://localhost:8000/sql | jq
+ ```
+
+## How to Build the Serverless Application
+
+The SurrealDB database is now prepared to act as the backend for your application. The SurrealDB server exposes a set of HTTP APIs that your frontend application can leverage directly, which in many cases eliminates the need for a separate backend application.
+
+Moreover, the schema setups in the previous section allow you to work with the SurrealDB endpoints more easily. By managing the schemas' default values and restrictions, you can implement logic that distinguishes API access.
+
+All of this makes SurrealDB an excellent backend for Jamstack architectures, which is what the rest of this guide uses. You can learn more about Jamstack through our guide [Getting Started with the Jamstack](/docs/guides/what-is-jamstack/).
+
+Specifically, the next steps in this tutorial help you set up a basic frontend application using the [Gatsby](https://www.gatsbyjs.com/) framework. Gatsby uses React to generate static sites, and thus makes a good base for a Jamstack application. Learn more about Gatsby in our guide [Generating Static Sites with Gatsby](/docs/guides/generating-static-sites-with-gatsby/).
+
+### Setting Up the Prerequisites
+
+Before developing the code for the Gatsby frontend, you need to install some prerequisites. Additionally, this tutorial generates the new Gatsby project from a base template to streamline the development.
+
+1. First, follow along with our [Installing and Using NVM](/docs/guides/how-to-install-use-node-version-manager-nvm/#install-nvm) guide to install the Node Version Manager (NVM).
+
+1. Then run the commands below to install and start using the current LTS release of Node.js:
+
+ ```command
+ nvm install --lts
+ nvm use --lts
+ ```
+
+1. Install the Gatsby command-line tool as a global NPM package:
+
+ ```command
+ npm install -g gatsby-cli
+ ```
+
+1. Generate the new Gatsby project from the default starter template, then change into the project directory. The command here creates the new project as `surreal-example-app` in the current user's home directory:
+
+ ```command
+ cd ~/
+ gatsby new surreal-example-app https://github.com/gatsbyjs/gatsby-starter-default
+ cd surreal-example-app/
+ ```
+
+1. Install the SurrealDB JavaScript library to the project. While the project could interact with the SurrealDB server via HTTP, the SurrealDB library provides a much more convenient interface.
+
+ ```command
+ npm install surrealdb.js --save
+ ```
+
+1. Customize the project's metadata as you see fit. The metadata for the Gatsby application is stored in the `gatsby-config.js` file.
+
+ ```command
+ nano gatsby-config.js
+ ```
+
+1. Here is an example of the kinds of changes you might make:
+
+ ```file {title="gatsby-config.js" lang="js" linenostart="11"}
+ siteMetadata: {
+ title: `Example App for SurrealDB`,
+        description: `An example serverless web application to demonstrate SurrealDB.`,
+ author: `Example User`,
+ siteUrl: `https://example-user.example.com`,
+ },
+ ```
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Open the default port to be used for the Gatsby application in your system's firewall. By default, Gatsby's development server runs on port `8000`. However, since that port is used by the SurrealDB server, this tutorial adopts port `8080` for its Gatsby examples.
+
+ {{< tabs >}}
+ {{< tab "Debian-based" >}}
+ On a Debian or Ubuntu system, use commands like the following to open the port:
+
+ ```command
+ sudo ufw allow 8080/tcp
+ sudo ufw reload
+ ```
+ {{< /tab >}}
+ {{< tab "RHEL-based" >}}
+ On a CentOS Stream or similar system (e.g. AlmaLinux or Rocky Linux), use commands like the following to open the port:
+
+ ```command
+ sudo firewall-cmd --zone=public --add-port=8080/tcp --permanent
+ sudo firewall-cmd --reload
+ ```
+ {{< /tab >}}
+ {{< /tabs >}}
+
+1. Now test the default Gatsby application by running the Gatsby development server:
+
+ ```command
+ gatsby develop -H 0.0.0.0 -p 8080
+ ```
+
+1. Open a web browser and navigate to port `8080` of your system's public IP address to see the Gatsby application.
+
+1. When done, stop the development server with the Ctrl + C key combination.
+
+### Building the Application
+
+With Gatsby installed and the base project set up, you can now start developing the example application itself. This mostly involves editing key parts of the default Gatsby application and adding a few components and services.
+
+Throughout the example code that follows, comments are provided to help navigate what each part of the application does.
+
+#### Defining the Display Components
+
+1. Open the `src/pages/index.js` file:
+
+ ```command
+ nano src/pages/index.js
+ ```
+
+1. Delete the file's existing contents and replace it with the following:
+
+ ```file {title="src/pages/index.js" lang="js"}
+ // Import modules for building and styling the page
+ import React from 'react';
+ import Layout from '../components/layout';
+ import * as styles from '../components/index.module.css';
+
+ // Import the custom components for rendering the page
+ import Login from '../components/login';
+ import Home from '../components/home';
+
+ // Import a function from the custom authentication service
+ import { isSignedIn } from '../services/auth';
+
+ // Render the page
+ const IndexPage = () => {
+      return (
+        <Layout>
+          { isSignedIn() ? ( <Home /> ) : ( <Login /> ) }
+        </Layout>
+      )
+ }
+
+ export default IndexPage
+ ```
+
+    This defines the landing page for the Gatsby application. In this example, the page simply wraps and displays one of two components, `Home` or `Login`, depending on the login status.
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Now open the `src/components/layout.js` file:
+
+ ```command
+ nano src/components/layout.js
+ ```
+
+1. Make two changes to the file:
+
+ - First, add an `import` statement for a `NavBar` component. Place this line alongside the other `import` statements near the beginning of the file.
+
+ ```file {title="src/components/layout.js" lang="js" linenostart="10"}
+ import NavBar from './nav-bar';
+ ```
+
+ - Modify the `return` section to remove the `Header` component and add the `NavBar` component just above the `main` element:
+
+ ```file {title="src/components/layout.js" lang="js" linenostart="25"}
+    return (
+      <>
+        <NavBar />
+        <div
+          style={{
+            margin: `0 auto`,
+            maxWidth: `var(--size-content)`,
+            padding: `var(--size-gutter)`,
+          }}
+        >
+          <main>{children}</main>
+        </div>
+      </>
+    )
+ ```
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Create a `src/components/nav-bar.js` file:
+
+ ```command
+ nano src/components/nav-bar.js
+ ```
+
+1. Give the file the contents below to create the `NavBar` component, which houses the **Logout** option for the application:
+
+ ```file {title="src/components/nav-bar.js" lang="js"}
+ // Import React for rendering
+ import React from 'react';
+
+ // Import functions from the custom authentication service
+ import { isSignedIn, handleSignOut } from '../services/auth';
+
+ // Render the component
+ export default function NavBar() {
+      return (
+        <nav style={{ display: 'flex', justifyContent: 'flex-end' }}>
+          { isSignedIn() ? (
+            <a
+              href="/"
+              onClick={event => {
+                event.preventDefault();
+                handleSignOut();
+                window.location.reload();
+              }}
+            >
+              Logout
+            </a>
+          ) : null }
+        </nav>
+      )
+ }
+ ```
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Create a `src/components/login.js` file:
+
+ ```command
+ nano src/components/login.js
+ ```
+
+1. Give it the contents here to create a `Login` component used to render the page for user logins:
+
+ ```file {title="src/components/login.js" lang="js"}
+ // Import modules for rendering the component and for React's state and
+ // effect features
+ import React, { useState, useEffect } from 'react';
+ import * as styles from '../components/index.module.css';
+
+ // Import functions from the custom authentication service
+ import { handleSignUp, handleSignIn } from '../services/auth';
+
+ const Login = () => {
+ // Initialize the components state variables, used for the login form
+ // submission
+ const [usernameInput, setUsernameInput] = useState('');
+ const [passwordInput, setPasswordInput] = useState('');
+ const [isSignUp, setIsSignUp] = useState(false);
+
+ // Define a function for handling user login
+ const handleSignInSubmit = (e) => {
+ e.preventDefault();
+
+ if (!isSignUp) {
+ handleSignIn(usernameInput, passwordInput)
+ .then( () => {
+ window.location.reload();
+ });
+ } else {
+ handleSignUp(usernameInput, passwordInput)
+ .then( () => {
+ window.location.reload();
+ });
+ }
+ }
+
+ // Render the component
+      return (
+        <div className={styles.textCenter}>
+          <h1>
+            Example App for SurrealDB
+          </h1>
+          <p>
+            A serverless application to demonstrate using SurrealDB as a full backend.
+          </p>
+          <form onSubmit={handleSignInSubmit}>
+            <h2>
+              Login
+            </h2>
+            <input type="text" placeholder="Username" value={usernameInput}
+              onChange={e => setUsernameInput(e.target.value)} />
+            <input type="password" placeholder="Password" value={passwordInput}
+              onChange={e => setPasswordInput(e.target.value)} />
+            <label>
+              <input type="checkbox" checked={isSignUp}
+                onChange={e => setIsSignUp(e.target.checked)} />
+              Sign Up
+            </label>
+            <button type="submit">Submit</button>
+          </form>
+        </div>
+      )
+ }
+
+ export default Login
+ ```
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Create a `src/components/home.js` file:
+
+ ```command
+ nano src/components/home.js
+ ```
+
+1. Give it the contents below to create the `Home` component, rendering the to-do list for a logged-in user:
+
+ ```file {title="src/components/home.js" lang="js"}
+ // Import modules for rendering the component and for React's state and
+ // effect features
+ import React, { useState, useEffect } from 'react';
+ import * as styles from '../components/index.module.css';
+
+ // Import functions for working with to-do items from the custom service
+ import { fetchItems, submitItem, completeItem } from '../services/todo';
+
+ const Home = () => {
+ // Initialize the state variables:
+ // - For the loaded list of items
+ const [itemsList, setItemsList] = useState([]);
+ const [itemsLoaded, setItemsLoaded] = useState(false);
+ // - For the list of items marked for completion
+ const [completionList, setCompletionList] = useState([]);
+ // - For any new item being entered
+ const [newTodoItem, setNewTodoItem] = useState('');
+
+ // Define an effect for loading the list of items
+ useEffect(() => {
+ const fetchData = async () => {
+ await fetchItems()
+ .then(result => {
+ setItemsList(result[0].result);
+ setItemsLoaded(true);
+ });
+ }
+
+ fetchData();
+ }, []);
+
+ // Define a function to keep the list of items marked for completion
+ // updated
+ const processCompletions = (e) => {
+ setCompletionList(e.target.checked
+ ? completionList.concat(e.target.id)
+ : completionList.filter(item => item !== e.target.id));
+ }
+
+ // Define a function for submitting items for completion
+ const submitCompletions = async () => {
+ for (const item of completionList) {
+ const result = await completeItem(item);
+ }
+
+ window.location.reload();
+ }
+
+ // Define a function for submitting a new item
+ const addNewTodoItem = async (e) => {
+ e.preventDefault();
+
+ if (newTodoItem !== '' && newTodoItem.length > 0) {
+ const result = await submitItem(newTodoItem);
+
+ if (result) {
+ window.location.reload();
+ } else {
+ alert('An error occurred');
+ }
+ }
+ }
+
+ // Render the component
+      return (
+        <div className={styles.textCenter}>
+          <h1>To-Do List</h1>
+          { itemsLoaded && itemsList.length > 0 ? (
+            <ul style={{ listStyle: 'none' }}>
+              { itemsList.map(item => (
+                <li key={item.id}>
+                  <input type="checkbox" id={item.id} onChange={processCompletions} />
+                  <label htmlFor={item.id}>{item.description}</label>
+                </li>
+              )) }
+            </ul>
+          ) : (
+            <p>Nothing to do!</p>
+          ) }
+          <button onClick={submitCompletions}>Complete Checked Items</button>
+          <form onSubmit={addNewTodoItem}>
+            <input type="text" placeholder="New to-do item" value={newTodoItem}
+              onChange={e => setNewTodoItem(e.target.value)} />
+            <button type="submit">Add Item</button>
+          </form>
+        </div>
+      )
+ }
+
+ export default Home
+ ```
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+#### Defining the Service Interfaces
+
+1. Create a `src/services` directory to store files that provide interfaces to external services. In this case, the files expose functions that leverage the SurrealDB JavaScript library to authenticate sessions and fetch and submit data.
+
+ ```command
+ mkdir src/services
+ ```
+
+1. Create a `src/services/auth.js` file:
+
+ ```command
+ nano src/services/auth.js
+ ```
+
+1. Give it the contents below to create a set of service functions for authenticating SurrealDB user sessions. Make sure to replace `SURREALDB_SERVER_URL` with your actual SurrealDB server's IP address or domain name, if configured.
+
+ ```file {title="src/services/auth.js" lang="js" hl_lines="6"}
+ // Import the SurrealDB library
+ import Surreal from 'surrealdb.js';
+
+ // Define the URL for your SurrealDB service; /rpc is the server's endpoint
+    // for WebSocket/library connections
+    const SURREAL_URL = 'http://SURREALDB_SERVER_URL:8000/rpc';
+
+ // Define convenience functions for fetching and setting a userToken within
+ // the browser's local storage, giving session persistence
+ const getCurrentToken = () => {
+ if (typeof window !== 'undefined' && window.localStorage.getItem('userToken')) {
+ return window.localStorage.getItem('userToken');
+ } else {
+ return null;
+ }
+ }
+
+ const setCurrentToken = (userToken) => {
+ window.localStorage.setItem('userToken', userToken);
+ }
+
+ // Define a function for checking whether a user is logged in by referring
+ // to the current userToken stored in the browser
+ export const isSignedIn = () => {
+ const currentToken = getCurrentToken();
+ return (currentToken !== null && currentToken !== '');
+ }
+
+ // Define a function for generating an authenticated database session; used
+ // mostly by other services to eliminate repetitious log-in code
+ export const generateAuthenticatedConnection = async () => {
+ if (isSignedIn()) {
+ const db = new Surreal(SURREAL_URL);
+ await db.authenticate(getCurrentToken());
+ await db.use({ ns: 'application', db: 'todo' });
+
+ return db
+ } else {
+ return null
+ }
+ }
+
+ // Define a set of functions for handling logins; the handleSignUp and
+ // handleSignIn functions call the postSignIn function with a flag
+ // indicating the type of authentication needed
+ const postSignIn = async (username, password, isSignUp) => {
+ if (username && password && username !== '' && password !== '') {
+ const db = new Surreal(SURREAL_URL);
+
+ const signInData = {
+ NS: 'application',
+ DB: 'todo',
+ SC: 'account',
+ username: username,
+ pass: password
+ }
+
+ if (isSignUp) {
+ await db.signup(signInData)
+ .then(response => {
+ setCurrentToken(response);
+ db.close();
+ });
+ } else {
+ await db.signin(signInData)
+ .then(response => {
+ setCurrentToken(response);
+ db.close();
+ });
+ }
+ } else {
+ handleSignOut();
+ }
+ }
+
+ export const handleSignUp = async (username, password) => {
+ await postSignIn(username, password, true);
+ }
+
+ export const handleSignIn = async (username, password) => {
+ await postSignIn(username, password, false);
+ }
+
+ // Define a function to log out by removing the userToken from browser storage
+ export const handleSignOut = () => {
+ setCurrentToken('');
+ }
+ ```
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Create a `src/services/todo.js` file:
+
+ ```command
+ nano src/services/todo.js
+ ```
+
+1. Give it the following contents to define a set of service functions for the application to fetch and submit changes to to-do items:
+
+ ```file {title="src/services/todo.js" lang="js"}
+ // Import the SurrealDB library
+ import Surreal from "surrealdb.js";
+
+    // Import the authentication service for generating a database connection
+ import { generateAuthenticatedConnection } from "../services/auth";
+
+ // Define a function for fetching items that are not marked completed;
+    // the database already handles limiting the results to those matching
+ // the current user's ID
+ export const fetchItems = async () => {
+ const db = await generateAuthenticatedConnection();
+
+ if (db) {
+ const query = db.query('SELECT id, description, date FROM item WHERE completed = false ORDER BY date;');
+ return query
+ } else {
+ return []
+ }
+ }
+
+ // Define a function for submitting a new item; new items only need a
+ // description, as the schema generates the rest of the data
+ export const submitItem = async (itemDescription) => {
+ const db = await generateAuthenticatedConnection();
+
+ if (db) {
+ const query = await db.create('item', { description: itemDescription });
+ return query
+ } else {
+ return null
+ }
+ }
+
+ // Define a function for marking an item completed
+ export const completeItem = async (itemId) => {
+ const db = await generateAuthenticatedConnection();
+
+ if (db) {
+ const query = await db.merge(itemId, { completed: true });
+ return query
+ } else {
+ return null
+ }
+ }
+ ```
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+## How to Run the Serverless Application with a SurrealDB Backend
+
+Now it's time to see the whole setup in action. With the server configured, running, and the application built, you can start the Gatsby development server to see the example application.
+
+Follow along with the steps here to ensure that everything is in place and start using the application.
+
+Further on you can find a suggestion for how to prepare the application for production deployment. It specifically focuses on using object storage for an efficient Jamstack deployment.
+
+1. Make sure the SurrealDB server is running on the expected address and port and using the database file from before:
+
+ ```command
+ surreal start --bind 0.0.0.0:8000 file:///home/example-user/.surrealdb/example.db
+ ```
+
+1. Start the Gatsby development server again using the same address and port as when you initially tested the Gatsby project further above:
+
+ ```command
+ gatsby develop -H 0.0.0.0 -p 8080
+ ```
+
+1. Open a web browser and navigate to port `8080` on your system's public IP address.
+
+You should now see the login page for the example to-do list application:
+
+
+
+Use the **Sign Up** option to create a user account. From there, you should be logged in and viewing the to-do list page. You can add items using the form, which should produce a list of those items, like so:
+
+
+
+### Deploying the Application to Object Storage
+
+There are several options for deploying a Jamstack application like the one shown throughout this tutorial. With the SurrealDB server providing an accessible backend and a static site for the frontend, you have a lot of versatility for hosting.
+
+See the deployment section of the [Gatsby guide](/docs/guides/generating-static-sites-with-gatsby/#deploy-a-gatsby-static-site) linked above for more ideas.
+
+For a more traditional static-site deployment process, read [Set up a Web Server and Host a Website on Linode](/docs/guides/set-up-web-server-host-website/). Additionally, our guide on how to [Deploy a Static Site using Hugo and Object Storage](/docs/guides/host-static-site-object-storage/) showcases a modern and streamlined process using object storage.
+
+Object storage provides an efficient and powerful possibility for hosting the static frontends of Jamstack applications.
+
+The steps that follow outline a method for deploying the Gatsby application created in this tutorial to Linode Object Storage. These steps are stated broadly, so supplement them with the more specific instructions in the guides linked above as needed.
+
+1. Build the Gatsby application. To do so, execute the command here in the project's directory. Gatsby renders the application in a set of static files that can be served.
+
+ ```command
+ gatsby build
+ ```
+
+1. Install `s3cmd` and configure it for your Linode Object Storage credentials and settings. See how to do that in our guide [Using S3cmd with Object Storage](/docs/products/storage/object-storage/guides/s3cmd/).
+
+1. Use `s3cmd` to create a new bucket, initialize the bucket as a website, and sync the application's static files to the bucket:
+
+ ```command
+ s3cmd mb s3://example-surreal-app
+ s3cmd ws-create --ws-index=index.html --ws-error=404.html s3://example-surreal-app
+ s3cmd --no-mime-magic --acl-public --delete-removed --delete-after sync public/ s3://example-surreal-app
+ ```
+
+1. At this point, you can access the application at the Linode Object Storage website URL, such as `example-surreal-app.website-[cluster-id].linodeobjects.com`.
+
+1. **Optional**: If using a custom domain name, create a `CNAME` record that points a hostname on the same domain as your SurrealDB server to the object storage bucket's URL. Doing so can help prevent CORS-related difficulties. See how to do this in the *Next Steps* section of the [object storage deployment guide](/docs/guides/host-static-site-object-storage/#optional-next-steps).
+
+## Conclusion
+
+This tutorial provides the tools needed to start implementing SurrealDB as a backend for applications. Doing so can often save significant time and effort in creating and maintaining backend APIs.
+
+More importantly, leveraging SurrealDB's APIs can make applications more adaptable. Whether it's for a traditional frontend, or a modern architecture like Jamstack with a static site generator, SurrealDB can be a full backend resource. This provides a lot of flexibility.
+
+Be sure to look at our other guides on SurrealDB, linked earlier in this tutorial. Additionally, learn more about schemas and document relations with our guide on [Modeling Data with SurrealDB’s Inter-document Relations](/docs/guides/surrealdb-interdocument-modeling/).
\ No newline at end of file
diff --git a/docs/guides/databases/surrealdb/surrealdb-interdocument-modeling/index.md b/docs/guides/databases/surrealdb/surrealdb-interdocument-modeling/index.md
new file mode 100644
index 00000000000..6eae89d166f
--- /dev/null
+++ b/docs/guides/databases/surrealdb/surrealdb-interdocument-modeling/index.md
@@ -0,0 +1,552 @@
+---
+slug: surrealdb-interdocument-modeling
+title: "Modeling Data with SurrealDB’s Inter-document Relations"
+description: "One of SurrealDB's chief features is its multi-model approach. Using inter-document relations, you can model your data according to your needs, without having to design all of your models in advance. Find out more and see the examples in this tutorial."
+authors: ['Nathaniel Stickman']
+contributors: ['Nathaniel Stickman']
+published: 2024-05-01
+keywords: ['surrealdb tutorial','surrealdb examples','surrealdb client']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[SurrealDB Documentation](https://surrealdb.com/docs)'
+- '[SurrealDB: RELATE Statement](https://surrealdb.com/docs/surrealql/statements/relate)'
+---
+
+SurrealDB leverages a multi-model approach to data. You can use whatever models fit your needs when storing and retrieving data, without meticulously planning out models in advance. For that, SurrealDB makes use of inter-document relations, and implements a highly-efficient core for managing relations.
+
+Follow along with this tutorial to start making the most out of SurrealDB, and see how you can use its multi-model architecture. Learn about the concepts behind SurrealDB's inter-document relations, and walk through examples that put them into practice.
+
+## What Are Inter-document Relations in SurrealDB?
+
+Inter-document relations have long been an integral part of document-centered NoSQL databases like MongoDB. Such databases can typically do without `JOIN` commands. Instead, relations between documents are formed by features like embedded documents and document references.
+
+SurrealDB itself has document logic at its core, from which it draws powerful possibilities for relating documents.
+
+SurrealDB expands on that core with a versatile multi-model approach. This allows SurrealDB to store and query data in tables like SQL relational databases, for efficient and familiar table structures. At the same time, it also grants SurrealDB the interconnected structure of NoSQL graph databases for complex relations between records.
+
+## How Do SurrealDB Inter-document Relations Work?
+
+As the description above shows, there are numerous ways to work with inter-document relations in a multi-model database like SurrealDB. Part of SurrealDB's advantage comes in its freedom to store and retrieve data with different models.
+
+However, to demonstrate and help you get started working with SurrealDB's inter-document relations, this tutorial breaks these down into two broad categories:
+
+- **Document**: Uses document-database notation to navigate nested and related documents.
+
+- **Graph**: Uses the interconnections of graphs to relate documents.
+
+### Document Notation
+
+SurrealDB supports notations similar to other document databases, such as MongoDB, for accessing nested fields from documents. This includes dot notation (`.`) and array notation (`[]`).
+
+To demonstrate, try out this sample data set:
+
+```command
+INSERT INTO person [
+ {
+ id: "one",
+ name: "Person One"
+ },
+ {
+ id: "two",
+ name: "Person Two"
+ },
+ {
+ id: "three",
+ name: "Person Three"
+ },
+ {
+ id: "four",
+ name: "Person Four"
+ }
+];
+
+INSERT INTO department [
+ {
+ id: "first",
+ participants: [
+ {
+ ref: person:one,
+ role: role:doer
+ },
+ {
+ ref: person:three,
+ role: role:undoer
+ }
+ ]
+ },
+ {
+ id: "second",
+ participants: [
+ {
+ ref: person:two,
+ role: role:doer
+ },
+ {
+ ref: person:three,
+ role: role:undoer
+ },
+ {
+ ref: person:four,
+ role: role:redoer
+ }
+ ]
+ }
+];
+
+INSERT INTO role [
+ {
+ id: "doer",
+ description: "Does Stuff"
+ },
+ {
+ id: "undoer",
+ description: "Fixes Things"
+ },
+ {
+ id: "redoer",
+ description: "Repairs Fixes"
+ }
+];
+```
+
+Here, data can be fetched from nested fields using document notation to access deeper and deeper levels. Additionally, record IDs in SurrealDB act as direct references, so having these in the documents above eases relations.
+
+In this example, dot notation allows for grabbing the `description` from a `role` document based on the ID held in a completely separate document in the `participants` array:
+
+```command
+SELECT role.description AS role
+ FROM department:second.participants
+ WHERE ref = person:three;
+```
+
+```output
+{
+ role: 'Fixes Things'
+}
+```
+
+Array notation from there allows you to select a particular member of an array based on its index (starting at zero). In this example, the query uses the `person` ID in a specific member of the `participants` array:
+
+```command
+SELECT name FROM department:first.participants[1].ref;
+```
+
+```output
+{
+ name: 'Person Three'
+}
+```
+
+### Graph Relations
+
+SurrealDB can build graph edge relations using its `RELATE` statement. Such a statement allows you to create vertex -> edge -> vertex relations between documents. Afterward, similar arrow notation can be used to leverage the document relations in queries.
+
+Instead of vertex -> edge -> vertex, you may find it helpful to think of these relations as noun -> verb -> noun. This tends to be how SurrealDB's documentation names these relations, and this tutorial does the same.
+
+Each `RELATE` results in a new table (the edge or verb) that operates to relate documents in a given way.
+
+Try out this data set to start working with graph relations in SurrealDB.
+
+```command
+INSERT INTO person [
+ {
+ id: "one",
+ name: "Person One"
+ },
+ {
+ id: "two",
+ name: "Person Two"
+ },
+ {
+ id: "three",
+ name: "Person Three"
+ },
+ {
+ id: "four",
+ name: "Person Four"
+ }
+];
+
+INSERT INTO department [
+ {
+ id: "first",
+ },
+ {
+ id: "second",
+ }
+];
+
+INSERT INTO role [
+ {
+ id: "doer",
+ description: "Does Stuff"
+ },
+ {
+ id: "undoer",
+ description: "Fixes Things"
+ },
+ {
+ id: "redoer",
+ description: "Repairs Fixes"
+ }
+];
+
+RELATE person:one->participates->department:first SET role = role:doer;
+RELATE person:three->participates->department:first SET role = role:undoer;
+
+RELATE person:two->participates->department:second SET role = role:doer;
+RELATE person:three->participates->department:second SET role = role:undoer;
+RELATE person:four->participates->department:second SET role = role:redoer;
+```
+
+Leveraging the graph relations, queries can navigate from vertex to vertex by way of the edges. To do so, recall the arrow notation from the initial `RELATE` statements. These define the directions for graph flows.
+
+The example here starts with the `department` vertices (because of `FROM department`). From there, it works through the `participates` edge, using a `WHERE` clause to limit by role. Finally, it renders the `name` field from the corresponding `person` vertices.
+
+```command
+SELECT <-(participates WHERE role=role:doer)<-person.name AS name
+    FROM department;
+```
+
+```output
+[
+ {
+ name: [
+ 'Person One'
+ ]
+ },
+ {
+ name: [
+ 'Person Two'
+ ]
+ }
+]
+```
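+
+Because each `RELATE` statement populates an edge table, you can also inspect the relations themselves. As a quick check against the sample data above, a query like the following returns the raw `participates` edge records:
+
+```command
+SELECT * FROM participates;
+```
+
+Each returned edge record includes `in` and `out` fields linking a `person` record to a `department` record, along with the `role` field set by the `RELATE` statements.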
+
+## How to Use SurrealDB’s Inter-document Relations
+
+You now have an overview and a start to exploring inter-document relations in SurrealDB. However, it can be helpful to see these features used in specific and more practical use cases.
+
+This section walks you through just that. While the data here may not distill all of the complexities of real-world data, it represents a relatable and practical use case. The examples here help provide a better foothold for navigating SurrealDB relations in all situations.
+
+### Setting Up the Prerequisites
+
+To get started, you need to have installed SurrealDB on your system and have placed the SurrealDB binary in your shell path. Follow along with our [Getting Started with SurrealDB](/docs/guides/getting-started-with-surrealdb/) guide to see how.
+
+This tutorial assumes that you have followed that guide up through the *How to Install SurrealDB* section, with SurrealDB installed and accessible via the `surreal` command.
+
+For the examples to follow, you only need to be running the SurrealDB server with local access. To make things even easier, just run the server with a root user. You can accomplish this with the following command:
+
+```command
+surreal start --bind 127.0.0.1:8000 --user root --pass exampleRootPass
+```
+
+By using a root user, you also have access to SurrealDB's command-line interface (CLI). This makes setting up data and exploring the effects of different queries significantly smoother.
+
+To start up the SurrealDB CLI, use the command below in a second terminal. This command assumes you have used the same parameters in starting your SurrealDB server as used in the example command above.
+
+```command
+surreal sql --conn http://localhost:8000 --user root --pass exampleRootPass --ns exampleNs --db exampleDb --pretty
+```
+
+### Populating a Database
+
+Using the SurrealDB CLI, you can now start populating the database. The goal in populating this example database is to leverage SurrealDB's multi-model inter-document relations. To that end, the example data is good for demonstrating both document relations and graph relations.
+
+The use case for the examples here is a system for tracking college courses. Such a system needs to be able to catalog courses and list available professors and their departments. To simplify things, these examples do not venture into adding student or scheduling data to the mix.
+
+#### Defining Schemas
+
+To begin, define each of the tables. SurrealDB is a document database at its core, but it borrows from relational databases the ability to define table schemas.
+
+Defining a table's schema is not required for the data used here. However, doing so is a good practice that makes your SurrealDB database more robust.
+
+The tables for courses and professors in this example are relatively flat, without nested fields to deal with. For that reason, those tables can benefit from SurrealDB's `SCHEMAFULL` designation. It provides strict schema enforcement, similar to traditional relational databases.
+
+The table listing departments needs a less strict schema, since the example here gives each department document an array of nested documents. Here, use SurrealDB's `SCHEMALESS` designation, which still lets you define a schema, though unenforced.
+
+So with `SCHEMALESS`, why define the table and fields at all? Because SurrealDB still enforces the permissions, assertions, and default values you add to schemaless tables.
+
+```command
+DEFINE TABLE course SCHEMAFULL;
+DEFINE FIELD name ON TABLE course TYPE string
+ ASSERT $value != NONE;
+DEFINE FIELD description ON TABLE course TYPE string
+ ASSERT $value != NONE;
+DEFINE FIELD hours ON TABLE course TYPE int
+ ASSERT $value != NONE && $value > 0;
+DEFINE FIELD capacity ON TABLE course TYPE int
+ ASSERT $value != NONE && $value > 0;
+
+DEFINE TABLE professor SCHEMAFULL;
+DEFINE FIELD name ON TABLE professor TYPE string
+ ASSERT $value != NONE;
+
+DEFINE TABLE department SCHEMALESS;
+DEFINE FIELD name ON TABLE department TYPE string
+ ASSERT $value != NONE;
+DEFINE FIELD courses ON TABLE department TYPE array;
+```
+
+#### Inserting Documents
+
+With the schemas defined, start adding in data. The four courses below provide a good base to start from. Each course has a designated ID, some descriptive text, and a set of numerical fields.
+
+```command
+INSERT INTO course [
+ {
+ id: "bio103",
+ name: "Human Biology",
+ description: "Builds on basic biology to introduce a study of the human organism.",
+ hours: 4,
+ capacity: 25
+ },
+ {
+ id: "eng101",
+ name: "English Composition",
+ description: "Teaches skills in English composition.",
+ hours: 3,
+ capacity: 15
+ },
+ {
+ id: "his102",
+ name: "American History, 1900–Present",
+ description: "Covers American history from 1900 to the present.",
+ hours: 3,
+ capacity: 20
+ },
+ {
+ id: "mat101",
+ name: "College Algebra I",
+ description: "Instructs the first phase of college-level algebra.",
+ hours: 3,
+ capacity: 30
+ }
+];
+```
+
+Next, those courses need instructors, so insert some data to create them. In this example, the professors only need an ID and a name. Everything else can be handled with relations, at least for the simple use case here.
+
+```command
+INSERT INTO professor [
+ {
+ id: "otwo",
+ name: "Dr. One Two"
+ },
+ {
+ id: "tfour",
+ name: "Dr. Three Four"
+ },
+ {
+ id: "fsix",
+ name: "Dr. Five Six"
+ },
+ {
+ id: "seight",
+ name: "Dr. Seven Eight"
+ },
+ {
+ id: "nten",
+ name: "Dr. Nine Ten"
+ },
+ {
+ id: "etwelve",
+ name: "Dr. Eleven Twelve"
+ }
+];
+```
+
+In this example, a professor's availability to instruct a course depends on whether the professor is part of the proper department. Graph relations provide a good method to relate professors with departments. To associate courses with departments, leverage nested arrays.
+
+```command
+INSERT INTO department [
+ {
+ id: "bio",
+ name: "Biological Sciences",
+ courses: [
+ {
+ course: course:bio103,
+ enrollment: 20
+ }
+ ]
+ },
+ {
+ id: "eng",
+ name: "English",
+ courses: [
+ {
+ course: course:eng101,
+ enrollment: 10
+ }
+ ]
+ },
+ {
+ id: "his",
+ name: "History",
+ courses: [
+ {
+ course: course:his102,
+ enrollment: 15
+ }
+ ]
+ },
+ {
+ id: "mat",
+ name: "Mathematics",
+ courses: [
+ {
+ course: course:mat101,
+ enrollment: 25
+ }
+ ]
+ }
+];
+```
+
+Using arrays of objects for the course list leaves the department more adaptable. More courses can be added, including more of the same kind. More advanced data like scheduling can also be manipulated here.
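+
+For example, a statement like the following sketch (using the sample data above) appends an additional section of the algebra course to the Mathematics department's `courses` array:
+
+```command
+UPDATE department:mat SET courses += { course: course:mat101, enrollment: 0 };
+```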
+
+#### Putting in Graph Relations
+
+As a last step for preparing the data, add the graph relations between professors and departments. The example here uses the `teaches` verb for the relations.
+
+```command
+RELATE professor:otwo->teaches->department:mat;
+RELATE professor:tfour->teaches->department:his;
+RELATE professor:fsix->teaches->department:mat;
+RELATE professor:seight->teaches->department:eng;
+RELATE professor:nten->teaches->department:bio;
+RELATE professor:etwelve->teaches->department:eng;
+```
+
+#### Optional Advanced Features
+
+One advanced possibility opened up by the setup above is further associating each professor with available schedules, which you could do with the `SET` option. That would also work well with the more advanced option of adding schedules to department course listings.
+
+This tutorial does not cover this scenario in full. However, if you are interested, here is a brief snippet of what you might do if you wanted to incorporate a `schedule` table:
+
+```command
+INSERT INTO department [
+ {
+ id: "mat",
+ name: "Mathematics",
+ courses: [
+ {
+ course: course:mat101,
+ enrollment: 25,
+            schedule: schedule:mwf1000
+ },
+ {
+ course: course:mat101,
+ enrollment: 19,
+            schedule: schedule:tr1600
+ }
+ ]
+ }
+];
+
+RELATE professor:otwo->teaches->department:mat
+ SET availability = [
+ schedule:mwf0900,
+ schedule:tr1400,
+ schedule:tr1600
+ ];
+```
+
+### Querying on Inter-document Relations
+
+Having the sample data in place, you can start to work through some practical applications of SurrealDB's inter-document relations. The queries that follow demonstrate particular use cases, and each provides practical SurrealQL tools to work with.
+
+- **Fetching Mathematics professors and courses.** Most SurrealDB queries that seek to model the retrieved data make use of document notation. Here, dot notation gives the query access to the nested `course.name` field associated with each `courses` ID.
+
+ What is more useful here is how SurrealDB lets you use those `courses` IDs just as if they were the objects those IDs refer to.
+
+ Going beyond the document notation, the query uses the `teaches` edge graph to retrieve professor names associated with the Mathematics department.
+
+ ```command
+ SELECT <-teaches<-professor.name AS math_professors,
+ courses.course.name AS math_courses
+        FROM department:mat;
+ ```
+
+ ```output
+ {
+ math_courses: [
+ 'College Algebra I'
+ ],
+ math_professors: [
+ 'Dr. One Two',
+ 'Dr. Five Six'
+ ]
+ }
+ ```
+
+- **Fetching the total number of enrolled students.** While the goal sounds simple, the initial model (i.e. how the data was input) does not make it straightforward to retrieve this total.
+
+ However, SurrealDB boasts the ability to remodel data *ad hoc* through queries. This means that you shouldn't have to design your tables around how you want to fetch data later. Nor should you have to use a server-side component to perform multiple queries and build up the model manually.
+
+    The most noteworthy document relation feature here is the use of `.enrollment` immediately after a parenthesized `SELECT` subquery. SurrealDB treats that subquery as if it were a document itself, allowing you to fetch a nested field from within it. The logic here is more akin to JavaScript than traditional SQL.
+
+ SurrealDB also includes a set of functions for things like working with arrays and applying math operations. The `array::flatten` function combines the multiple returned arrays, and then `math::sum` adds together all of the numbers in that resulting array.
+
+ ```command
+ SELECT * FROM math::sum(
+ ( array::flatten(
+ ( SELECT * FROM
+ ( SELECT courses.enrollment AS enrollment
+ FROM department )
+ ).enrollment )
+ )
+ );
+ ```
+
+ ```output
+ 70
+ ```
+
+- **Fetching percentage enrollments for each department.** SurrealDB's more advanced queries can leverage nested queries and functions to sleekly perform operations on data.
+
+ Like in the previous query, this one uses some of SurrealDB's built-in functions. The ones here perform some math operations and cast a value as a specific data type.
+
+    The structure has similarities to traditional SQL queries, but it leverages the `courses.enrollment` and `courses.course.capacity` relations, similar to the first example query above.
+
+```command
+SELECT department, type::int(
+ math::round( ( enrolled / capacity ) * 100 ) ) AS percentage_enrollment
+ FROM ( SELECT name AS department,
+ math::sum(courses.enrollment) AS enrolled,
+ math::sum(courses.course.capacity) AS capacity
+ FROM department );
+```
+
+```output
+[
+ {
+ department: 'Biological Sciences',
+ percentage_enrollment: 80
+ },
+ {
+ department: 'English',
+ percentage_enrollment: 67
+ },
+ {
+ department: 'History',
+ percentage_enrollment: 75
+ },
+ {
+ department: 'Mathematics',
+ percentage_enrollment: 83
+ }
+]
+```
+
+## Conclusion
+
+You now have a foundation in how SurrealDB employs inter-document relations and achieves its multi-model approach. The explanations and demonstrations in this tutorial aim to give you tools to use when making your own SurrealDB models. From schema definitions to queries with document and graph relations, you should be able to craft your data to your needs.
+
+Continue learning everything you need to make the most of SurrealDB with our other tutorials:
+
+- [Managing Security and Access Control for SurrealDB](/docs/guides/managing-security-and-access-for-surrealdb/)
+
+- [Building a Web Application on Top of SurrealDB](/docs/guides/surrealdb-for-web-applications)
+
+- [Deploying a SurrealDB Cluster](/docs/guides/deploy-surrealdb-cluster/)
diff --git a/docs/guides/development/concepts/working-with-graph-data-structures/Figure_1.png b/docs/guides/development/concepts/working-with-graph-data-structures/Figure_1.png
new file mode 100644
index 00000000000..663c6f3e332
Binary files /dev/null and b/docs/guides/development/concepts/working-with-graph-data-structures/Figure_1.png differ
diff --git a/docs/guides/development/concepts/working-with-graph-data-structures/Figure_2.png b/docs/guides/development/concepts/working-with-graph-data-structures/Figure_2.png
new file mode 100644
index 00000000000..0bcf7726770
Binary files /dev/null and b/docs/guides/development/concepts/working-with-graph-data-structures/Figure_2.png differ
diff --git a/docs/guides/development/concepts/working-with-graph-data-structures/Figure_3.png b/docs/guides/development/concepts/working-with-graph-data-structures/Figure_3.png
new file mode 100644
index 00000000000..704342089be
Binary files /dev/null and b/docs/guides/development/concepts/working-with-graph-data-structures/Figure_3.png differ
diff --git a/docs/guides/development/concepts/working-with-graph-data-structures/index.md b/docs/guides/development/concepts/working-with-graph-data-structures/index.md
new file mode 100644
index 00000000000..23063448073
--- /dev/null
+++ b/docs/guides/development/concepts/working-with-graph-data-structures/index.md
@@ -0,0 +1,112 @@
+---
+slug: working-with-graph-data-structures
+title: "Working with Graph Data Structures in the Real World"
+description: "This comprehensive guide delves into graph terminology, essential algorithms, weighted graphs, and their real-world applications, providing valuable insights for effective data graph usage."
+authors: ["John Mueller"]
+contributors: ["John Mueller"]
+published: 2024-05-07
+keywords: ['Directed Versus Undirected Graphs', 'DFS vs BFS', 'Weighted Graphs', 'Minimum Spanning Trees']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[Cheney’s algorithm](https://www.cs.york.ac.uk/fp/cgo/lectures/chapter10.pdf)'
+- '[Kahn’s algorithm](https://www.geeksforgeeks.org/topological-sorting-indegree-based-solution/)'
+- '[Boruvka’s algorithm](https://www.geeksforgeeks.org/boruvkas-algorithm-greedy-algo-9/)'
+- '[Prim’s algorithm](https://www.geeksforgeeks.org/prims-minimum-spanning-tree-mst-greedy-algo-5/)'
+- '[Kruskal’s algorithm](https://www.geeksforgeeks.org/kruskals-minimum-spanning-tree-algorithm-greedy-algo-2/)'
+---
+
+Graphical representations of data are an essential element of data presentation. As an analogy to demonstrate this, imagine the difficulty in navigating a city using a list of street connections rather than the graphical equivalent of a map. Data graph structures provide a pictorial presentation of the connections between nodes on a network. These pictorial presentations find use in all sorts of ways in real life, such as the GPS map in your car or a troubleshooting display of the hardware on a network.
+
+## Defining What a Graph Is All About
+
+A *graph* is a picture of interconnected data like the one shown in Figure 1. It usually relies on circles for nodes and lines to show the relationships between nodes. The nodes, also called vertices, are data points of some sort, such as a location in a city, a conversational hierarchy in email, or a list of data points as in Figure 1.
+
+![Figure 1](Figure_1.png)
+{.flex .justify-center .items-center}
+
+*Figure 1*
+{.text-center}
+
+### A Quick Overview of Graph Terminology
+
+- **Node (vertex)**: A *data point* could be a letter, number, or special character and it may be the only element that defines nodes in a graph. When working with two-dimensional graphs, a *coordinate* provides an x and y location that orients the node with regard to other nodes in the graph space. The node normally has a name (often the data point value) to make referencing it easier.
+
+- **Edge**: A tuple containing two node names. The first node name in the tuple connects to the second node name. This preciseness of reference is important when working with *directed graphs*, those that show a direction of connection, such as that used on maps to indicate one-way streets.
+
+- **Adjacency**: Defines whether one node connects to another node through an edge. If A connects to B, then A is adjacent to B, even if A and B are widely separated pictorially in the graph space.
+
+- **Path**: The listing of adjacent node names between two nodes. For example, if A connects to B, and B connects to C, then the path between A and C is ABC.
+
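+To make these terms concrete, the following is a minimal sketch in Python, chosen here purely for illustration since this guide is otherwise language-agnostic. The node names and edge list are assumptions kept consistent with the search orders described later in this guide, not an exact reproduction of Figure 1.
+
+```python
+# Nodes (vertices) are named data points.
+nodes = ["A", "B", "C", "D", "E"]
+
+# Edges are tuples of two node names; for an undirected graph, each tuple
+# represents a connection in both directions.
+edges = [("A", "B"), ("A", "C"), ("A", "E"), ("B", "C"), ("C", "D"), ("D", "E")]
+
+# Build an adjacency list: the nodes each node connects to directly.
+adjacency = {node: [] for node in nodes}
+for first, second in edges:
+    adjacency[first].append(second)
+    adjacency[second].append(first)  # Omit this line for a directed graph.
+
+# Adjacency: A connects to B through an edge, so A is adjacent to B.
+print("B" in adjacency["A"])  # True
+
+# Path: the listing of adjacent node names between two nodes, such as ABC.
+print("".join(["A", "B", "C"]))  # ABC
+```
+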
+### Directed Versus Undirected Graphs
+
+Figure 2 shows an example of a graph with both directed and undirected elements. A *directed graph* is one in which a relationship between two adjacent nodes is one way, like the one-way streets on a map. An *undirected graph* is one in which there is a relationship in both directions between two adjacent nodes, such as the connection between B and C in Figure 2. In some graphs, you may actually see an undirected element shown as two edges with one edge pointing in one direction and the other edge pointing in the other direction. For example, a flow diagram would, of necessity, have to show two-way connections between nodes (such as valves). When a graph shows only undirected elements, the arrows are commonly left out, as shown in Figure 1.
+
+![Figure 2](Figure_2.png)
+{.flex .justify-center .items-center}
+
+*Figure 2*
+{.text-center}
+
+## Essential Graph Algorithms and Approaches
+
+Creating a data graph is only part of the task. The next step is to search, evaluate, and understand the data graph using algorithms.
+
+### Depth First Search (DFS) Versus Breadth First Search (BFS)
+
+Finding what is needed is one of the primary goals of constructing a data graph. A *Depth First Search* (DFS) starts at an arbitrary point in a data graph, the source, and searches to the end of the data graph before it begins searching the next connection to the source. Using Figure 1 as a reference and A as the source, a DFS would search the nodes ABCDE first, then AC. This is how a DFS is commonly used:
+
+- Finding connected components
+- Performing topological sorting in a Directed Acyclic Graph (DAG)
+- Locating the bridges in a graph (such as the connection between A and C)
+- Solving puzzles with only one solution (such as a maze)
+
+A *Breadth First Search* (BFS) starts at an arbitrary point in a data graph and searches the neighboring nodes first, before moving on to the next level of nodes. Again using Figure 1 as a reference, the search pattern in this case would be AB, AC, and AE, then ACD and AED. There are advantages to each approach. This is how BFS is commonly used:
+
+- Performing memory garbage collection using [Cheney’s algorithm](https://www.cs.york.ac.uk/fp/cgo/lectures/chapter10.pdf)
+- Finding the shortest path between two points
+- Testing whether a data graph is [bipartite](https://www.techiedelight.com/bipartite-graph/)
+- Implementing a web crawler
+
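+Using the adjacency-list representation from the earlier sketch, the two traversal orders can be illustrated as follows. This is a minimal sketch of the idea rather than a production search routine, and the graph literal is the same assumed version of Figure 1 used above.
+
+```python
+from collections import deque
+
+# Assumed adjacency list for Figure 1 (see the earlier sketch).
+adjacency = {
+    "A": ["B", "C", "E"],
+    "B": ["A", "C"],
+    "C": ["A", "B", "D"],
+    "D": ["C", "E"],
+    "E": ["A", "D"],
+}
+
+def dfs(graph, source):
+    """Depth First Search: follow each branch to its end before backtracking."""
+    visited, order = set(), []
+
+    def visit(node):
+        visited.add(node)
+        order.append(node)
+        for neighbor in graph[node]:
+            if neighbor not in visited:
+                visit(neighbor)
+
+    visit(source)
+    return order
+
+def bfs(graph, source):
+    """Breadth First Search: visit all neighbors before going a level deeper."""
+    visited, order, queue = {source}, [], deque([source])
+    while queue:
+        node = queue.popleft()
+        order.append(node)
+        for neighbor in graph[node]:
+            if neighbor not in visited:
+                visited.add(neighbor)
+                queue.append(neighbor)
+    return order
+
+print(dfs(adjacency, "A"))  # ['A', 'B', 'C', 'D', 'E']
+print(bfs(adjacency, "A"))  # ['A', 'B', 'C', 'E', 'D']
+```
+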
+### Understanding Directed Acyclic Graphs (DAGs) and Kahn’s Algorithm
+
+Certain kinds of problem resolution require a specific kind of data graph. A *Directed Acyclic Graph* (DAG) is a directed graph that contains no cycles. Using Figure 2 as an example, there is a cycle created by the nodes ACDE because it’s possible to go around in a circle using those nodes. Problems in areas such as spreadsheet calculation, evolution, family trees, epidemiology, citation networks, and scheduling can’t be solved when the data graph has cycles in it. To locate potential cycles and make data graph node access less complicated, applications perform topological sorting using algorithms such as [Kahn’s algorithm](https://www.geeksforgeeks.org/topological-sorting-indegree-based-solution/).
+
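+As a brief illustration, here is a sketch of Kahn’s algorithm in Python on a small, hypothetical DAG (not the graph in Figure 2). It repeatedly removes nodes that have no remaining incoming edges; if any nodes are left over, the graph contains a cycle and has no valid topological order.
+
+```python
+from collections import deque
+
+# A hypothetical DAG: each key lists the nodes it points to.
+dag = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
+
+def topological_sort(graph):
+    # Count incoming edges for every node.
+    indegree = {node: 0 for node in graph}
+    for targets in graph.values():
+        for target in targets:
+            indegree[target] += 1
+
+    # Start with the nodes that have no incoming edges.
+    queue = deque(node for node, degree in indegree.items() if degree == 0)
+    order = []
+    while queue:
+        node = queue.popleft()
+        order.append(node)
+        for target in graph[node]:
+            indegree[target] -= 1
+            if indegree[target] == 0:
+                queue.append(target)
+
+    if len(order) != len(graph):
+        raise ValueError("The graph contains a cycle; no topological order exists.")
+    return order
+
+print(topological_sort(dag))  # ['A', 'B', 'C', 'D']
+```
+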
+### Working with Minimum Spanning Trees (MSTs)
+
+To solve certain problems, such as laying new cable in a neighborhood, it’s essential to calculate not only the shortest path between nodes but also the path with the lowest cost, perhaps the price of burying the cable. A *Minimum Spanning Tree* (MST) uses a data graph similar to the one shown in Figure 3 (without the cycle in it) to perform this task. The tree is a separately constructed graph that connects every node using the set of edges with the lowest total weight. There are many different algorithms for solving the MST problem. Here are some of the most popular ones, followed below by a short sketch of Kruskal’s approach:
+
+- [Boruvka’s algorithm](https://www.geeksforgeeks.org/boruvkas-algorithm-greedy-algo-9/)
+- [Prim’s algorithm](https://www.geeksforgeeks.org/prims-minimum-spanning-tree-mst-greedy-algo-5/)
+- [Kruskal’s algorithm](https://www.geeksforgeeks.org/kruskals-minimum-spanning-tree-algorithm-greedy-algo-2/)
+
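+The following compact sketch in Python shows Kruskal’s approach: sort the edges by weight, then keep each edge that does not create a cycle. The weighted edge list is a hypothetical example, not the exact graph shown in Figure 3.
+
+```python
+# Hypothetical weighted, undirected edges: (node, node, weight).
+edges = [
+    ("A", "B", 4), ("A", "C", 3), ("A", "E", 6),
+    ("B", "C", 2), ("C", "D", 6), ("D", "E", 4),
+]
+nodes = {"A", "B", "C", "D", "E"}
+
+# A very simple union-find structure used to detect cycles.
+parent = {node: node for node in nodes}
+
+def find(node):
+    # Follow parent links to the representative of the node's component.
+    while parent[node] != node:
+        node = parent[node]
+    return node
+
+mst, total = [], 0
+for first, second, weight in sorted(edges, key=lambda edge: edge[2]):
+    root_first, root_second = find(first), find(second)
+    if root_first != root_second:  # Skip edges that would create a cycle.
+        parent[root_first] = root_second
+        mst.append((first, second, weight))
+        total += weight
+
+print(mst)    # The edges that make up the minimum spanning tree.
+print(total)  # The tree's total weight: 15 for this example.
+```
+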
+## Considering Graph Weights
+
+A *graph weight* is a measurement placed on an edge that indicates some type of cost for that edge, as shown in Figure 3. It could be a distance, an amount of fuel used, time to travel, or anything else the designer uses to compare the paths between two points. In Figure 3, the cost of going from A to E to D (a value of 10) is more than the cost of going from A to C to D (a value of 9), even though the result (starting at A and ending at D) and the number of nodes traversed are the same. Fortunately, it isn’t necessary to calculate these costs by hand. The Choosing Between Algorithms article describes how the Dijkstra, Bellman-Ford, and Floyd-Warshall algorithms locate the shortest route between nodes.
+
+![Figure 3](Figure_3.png)
+{.flex .justify-center .items-center}
+
+*Figure 3*
+{.text-center}
+
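+As an illustration of how such an algorithm uses the weights, here is a minimal sketch of Dijkstra’s algorithm in Python. The individual edge weights are assumptions chosen only so that the path totals match the description of Figure 3 above (A to C to D costs 9, A to E to D costs 10); they are not taken directly from the figure.
+
+```python
+import heapq
+
+# Hypothetical weighted, undirected graph: neighbor -> edge weight.
+graph = {
+    "A": {"B": 4, "C": 3, "E": 6},
+    "B": {"A": 4, "C": 2},
+    "C": {"A": 3, "B": 2, "D": 6},
+    "D": {"C": 6, "E": 4},
+    "E": {"A": 6, "D": 4},
+}
+
+def dijkstra(graph, source):
+    """Return the lowest total edge weight from the source to every other node."""
+    costs = {node: float("inf") for node in graph}
+    costs[source] = 0
+    queue = [(0, source)]
+    while queue:
+        cost, node = heapq.heappop(queue)
+        if cost > costs[node]:
+            continue  # A cheaper path to this node was already found.
+        for neighbor, weight in graph[node].items():
+            new_cost = cost + weight
+            if new_cost < costs[neighbor]:
+                costs[neighbor] = new_cost
+                heapq.heappush(queue, (new_cost, neighbor))
+    return costs
+
+print(dijkstra(graph, "A"))  # {'A': 0, 'B': 4, 'C': 3, 'D': 9, 'E': 6}
+```
+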
+### Uses for Weighted Graphs
+
+Weighted graphs not only show the relationships between nodes, they also associate a cost with each relationship, making it easier for the viewer to objectively evaluate the cost of moving from one node to another. A node can represent anything, as can the edges between nodes. Consequently, while many people associate weighted graphs with applications like GPS, where time, distance, and fuel costs are all important measures, the same technology can also address needs like making a decision within an organization based on its goals and the costs to achieve them. A weighted graph can also show benefits rather than costs. A properly constructed graph shows the path most likely to produce maximum profit for a venture based on current information. The uses for graphs are limitless, depending solely on the creativity of the person developing them.
+
+### Defining Goals for Graph Weighting
+
+A graph must meet the goals set for it, and those goals must be reasonable and useful. In creating a GPS application, most designers focus solely on distance, but that approach leaves the user with an inaccurate view of the driving environment. A more useful GPS considers:
+
+- Distance
+- Fuel costs
+- Travel time
+- Construction delays
+- Traffic delays
+- Weather
+- Complexity (number of nodes traversed)
+
+The weighting of these considerations needs to be in the user’s hands. Perhaps the user is most interested in keeping fuel costs low, so distance and travel time have a lower priority in determining the best route. Complexity is also an issue because some GPS routes take a driver through more turns in some of the least traveled and inhospitable areas just to save a few miles and a little time.
+
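+One way to model this is to compute each edge’s weight as a sum of the factors above, multiplied by user-adjustable priorities. The factor names and numbers in the following sketch are purely illustrative assumptions.
+
+```python
+# Each edge's cost is a weighted sum of several factors; the multipliers
+# reflect the user's priorities. All names and values here are illustrative.
+def edge_cost(factors, priorities):
+    return sum(priorities.get(name, 0) * value for name, value in factors.items())
+
+# A user who cares most about fuel cost weights it more heavily than
+# distance or travel time.
+priorities = {"distance_miles": 0.2, "fuel_dollars": 1.0, "minutes": 0.5}
+
+highway_leg = {"distance_miles": 12.0, "fuel_dollars": 1.80, "minutes": 11.0}
+backroad_leg = {"distance_miles": 9.0, "fuel_dollars": 2.40, "minutes": 16.0}
+
+print(edge_cost(highway_leg, priorities))   # 9.7
+print(edge_cost(backroad_leg, priorities))  # 12.2
+```
+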
+## Conclusion
+
+Using data graphs correctly greatly enhances an organization’s ability to visualize relationships between any set of data points. Performing this analysis can require some [significant computing horsepower](https://www.linode.com/) when the number of data elements is high, but the benefits far outweigh the costs in most cases. The essential element of success is ensuring that the data graph meets the goals of a particular requirement and uses the correct algorithm(s) for the task.
\ No newline at end of file
diff --git a/docs/guides/development/frameworks/laravel/_index.md b/docs/guides/development/frameworks/laravel/_index.md
index 8ee8675d8fe..22948ea89d9 100644
--- a/docs/guides/development/frameworks/laravel/_index.md
+++ b/docs/guides/development/frameworks/laravel/_index.md
@@ -7,5 +7,4 @@ published: 2021-06-03
keywords: ["laravel", "php"]
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
show_in_lists: true
----
-
+---
\ No newline at end of file
diff --git a/docs/guides/development/frameworks/laravel/how-to-create-website-using-laravel/index.md b/docs/guides/development/frameworks/laravel/how-to-create-website-using-laravel/index.md
index 2fc0a326c21..876f4977fa8 100644
--- a/docs/guides/development/frameworks/laravel/how-to-create-website-using-laravel/index.md
+++ b/docs/guides/development/frameworks/laravel/how-to-create-website-using-laravel/index.md
@@ -6,6 +6,7 @@ description: 'Learn the basics of building a website with the Laravel framework,
authors: ["Nathaniel Stickman"]
contributors: ["Nathaniel Stickman"]
published: 2021-06-04
+modified: 2021-11-07
keywords: ['laravel','php','web application','web framework','deploy a website','debian','ubuntu','centos']
tags: ['laravel', 'debian', 'ubuntu', 'centos', 'php']
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
@@ -36,13 +37,13 @@ This guide is written for non-root user. Commands that require elevated privileg
### Install the Prerequisites
-1. Install PHP and Laravel's recommended PHP extensions.
+1. Install PHP and Laravel's recommended PHP extensions.
- - On Debian and Ubuntu, you can use:
+ - **Debian and Ubuntu**:
sudo apt install php php-bcmath php-common php-curl php-json php-mbstring php-mysql php-xml php-zip php8.1-fpm openssl
- - On CentOS, you need to take the additional step of adding the [Remi repository](https://rpms.remirepo.net/), since the package manager's default repositories only include PHP version 7.2.
+ - **CentOS 8**. Since the package manager's default repositories only include PHP version 7.2, you need to take the additional step of adding the [Remi repository](https://rpms.remirepo.net/).
- First, add the Remi repository:
@@ -58,11 +59,15 @@ This guide is written for non-root user. Commands that require elevated privileg
sudo dnf install php php-bcmath php-common php-json php-mbstring php-mysql php-xml php-zip curl openssl
-1. Change into the directory where you intend to keep your Laravel project's directory. In this example, you use the current user's home directory.
+ - **openSUSE**:
+
+ sudo zypper install php7 php7-bcmath php7-curl php7-fileinfo php7-json php7-mbstring php7-mysql php7-openssl php7-zip php-composer
+
+1. Change into the directory where you intend to keep your Laravel project's directory. In this example, you use the current user's home directory.
cd ~
-1. Download [Composer](https://getcomposer.org/), ensure that Composer can be used globally, and make it executable.
+1. Download [Composer](https://getcomposer.org/), ensure that Composer can be used globally, and make it executable.
curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer
@@ -70,30 +75,30 @@ This guide is written for non-root user. Commands that require elevated privileg
### Create a Laravel Project
-1. Create your Laravel application.
+1. Create your Laravel application.
composer create-project laravel/laravel example-app
-1. Change into the directory created for the application.
+1. Change into the directory created for the application.
cd example-app
- {{< note respectIndent=false >}}
-Unless noted otherwise, all subsequent commands in this guide assume you are still in `example-app` project directory.
-{{< /note >}}
+ {{< note >}}
+    Unless noted otherwise, all subsequent commands in this guide assume you are still in the `example-app` project directory.
+ {{< /note >}}
-1. Run the PHP development server, Artisan, to verify that the Laravel setup is complete.
+1. Run the PHP development server, Artisan, to verify that the Laravel setup is complete.
php artisan serve
Artisan serves the application on `localhost:8000`. To visit the application remotely, you can use an SSH tunnel:
- - On Windows, you can use the PuTTY tool to set up your SSH tunnel. Follow the appropriate section of the [Using SSH on Windows](/docs/guides/connect-to-server-over-ssh-on-windows/#ssh-tunnelingport-forwarding) guide, replacing the example port number there with **8000**.
- - On OS X or Linux, use the example command to set up the SSH tunnel. Replace `example-user` with your username on the application server and `192.0.2.0` with the server's IP address. Ensure that you can access the server on port `8000` use the `sudo ufw allow 8000` to be enable access.
+ - On Windows, you can use the PuTTY tool to set up your SSH tunnel. Follow the appropriate section of the [Using SSH on Windows](/docs/guides/connect-to-server-over-ssh-on-windows/#ssh-tunnelingport-forwarding) guide, replacing the example port number there with **8000**.
+    - On OS X or Linux, use the example command to set up the SSH tunnel. Replace `example-user` with your username on the application server and `192.0.2.0` with the server's IP address. Ensure that you can access the server on port `8000` by running `sudo ufw allow 8000` to enable access.
ssh -L8000:localhost:8000 example-user@192.0.2.0
-1. Now, you can visit the application in your browser by navigating to `localhost:8000`.
+1. Now, you can visit the application in your browser by navigating to `localhost:8000`.

@@ -101,115 +106,115 @@ Unless noted otherwise, all subsequent commands in this guide assume you are sti
This section shows you how to start working with Laravel's *controllers* and *views* to make your own website.
-1. Follow the steps in the [Create a Laravel Project](#create-a-laravel-project) section above to get started with a base project.
+1. Follow the steps in the [Create a Laravel Project](#create-a-laravel-project) section above to get started with a base project.
-1. This example builds a website with a **Home** page and an **About** page. Create the routes for each by opening the routes file — `~/example-app/routes/web.php` — and add the following contents:
+1. This example builds a website with a **Home** page and an **About** page. Create the routes for each by opening the routes file — `~/example-app/routes/web.php` — and add the following contents:
- {{< file "~/example-app/routes/web.php" >}}
-}}
+ Route::get('/about', [AboutController::class, 'index']);
+ ```
First, this imports the controllers—`HomeController` and `AboutController` that get created in the next two steps. Then, it routes requests to the `/home` and `/about` URLs to their respective controllers. It also includes a route to redirect traffic from the base URL (`/`) to the `/home` URL.
-1. Create the Home controller by creating an `~/example-app/app/Http/Controllers/HomeController.php` file and giving it the contents shown below:
+1. Create the Home controller by creating an `~/example-app/app/Http/Controllers/HomeController.php` file and giving it the contents shown below:
- {{< file "~/example-app/app/Http/Controllers/HomeController.php" >}}
- 'Home Page']);
+ public function index()
+ {
+ return view('home', ['title' => 'Home Page']);
+ }
}
-}
- {{< /file >}}
+ ```
This controller simply renders the Home page view and feeds a `title` parameter into it.
-1. Do the same for the About controller. In this case, the new file is `~/example-app/app/Http/Controllers/AboutController.php`. This controller serves the same function as the Home controller, however, it renders the about page view instead.
+1. Do the same for the About controller. In this case, the new file is `~/example-app/app/Http/Controllers/AboutController.php`. This controller serves the same function as the Home controller, however, it renders the about page view instead.
- {{< file "~/example-app/app/Http/Controllers/AboutController.php" >}}
- 'About Page']);
+ public function index()
+ {
+ return view('about', ['title' => 'About Page']);
+ }
}
-}
- {{< /file >}}
-
-1. This example's views share a navigation menu, so the website can use a layout template to reduce duplicate code. Create the layout template as `~/example-app/resources/views/layouts/master.blade.php`, and give it the contents shown in the example below.
-
- {{< note respectIndent=false >}}
-Before creating your layout template, you need to create the `layouts` subdirectory.
-
- mkdir ~/example-app/resources/views/layouts
-{{< /note >}}
-
- {{< file "~/example-app/resources/views/layouts/master.blade.php" >}}
-
-
- @if ($title)
- {{ $title }}
- @else
- Example Laravel App
- @endif
-
-
-
-
-
- {{< /file >}}
-
-1. Now, to create the views themselves. Create a `~/example-app/resources/views/home.blade.php` file and a `~/example-app/resources/views/about.blade.php` file. Add the contents of the example files below:
-
- {{< file "~/example-app/resources/views/home.blade.php" >}}
-@extends('layouts.master')
-
-@section('content')
-
{{ $title }}
-
This is the home page for an example Laravel web application.
This is the about page for an example Laravel web application.
-@endsection
- {{< /file >}}
+ ```
+
+1. This example's views share a navigation menu, so the website can use a layout template to reduce duplicate code. Create the layout template as `~/example-app/resources/views/layouts/master.blade.php`, and give it the contents shown in the example below.
+
+ {{< note >}}
+ Before creating your layout template, you need to create the `layouts` subdirectory.
+
+ mkdir ~/example-app/resources/views/layouts
+ {{< /note >}}
+
+ ```file {title="~/example-app/resources/views/layouts/master.blade.php"}
+
+
+ @if ($title)
+ {{ $title }}
+ @else
+ Example Laravel App
+ @endif
+
+
+
+
+
+ ```
+
+1. Now, to create the views themselves. Create a `~/example-app/resources/views/home.blade.php` file and a `~/example-app/resources/views/about.blade.php` file. Add the contents of the example files below:
+
+ ```file {title="~/example-app/resources/views/home.blade.php"}
+ @extends('layouts.master')
+
+ @section('content')
+
{{ $title }}
+
This is the home page for an example Laravel web application.
This is the about page for an example Laravel web application.
+ @endsection
+ ```
Each of these view templates first declares that it extends the `master` layout template. This lets each work within the layout, reducing the amount of code you have to rewrite and making sure the pages are consistent. Each view defines its main contents as being part of the `content` section, which was defined in the `master` layout.
-1. Run the application using the steps given at the end of the [Create a Laravel Project](#create-a-laravel-project) section above.
+1. Run the application using the steps given at the end of the [Create a Laravel Project](#create-a-laravel-project) section above.
You can now visit the website on `localhost:8000`.
@@ -221,88 +226,92 @@ While the Artisan server works well for development, it is recommended that you
These steps assume your application has the same location and name as given in the previous sections.
-1. Install NGINX.
+1. Install NGINX and php-fpm.
+
+ - On Debian and Ubuntu, use:
- - On Debian and Ubuntu, use:
+ sudo apt install nginx php7.4-fpm
- sudo apt install nginx
+ - On CentOS, use:
- - On CentOS, use:
+ sudo yum install nginx php-fpm
- sudo yum install nginx
+ - On openSUSE, use:
-1. Copy your Laravel project directory to `/var/www`.
+ sudo zypper install nginx php-fpm
+
+1. Copy your Laravel project directory to `/var/www`.
sudo cp -R ~/example-app /var/www
-1. Give the `www-data` user ownership of the project's `storage` subdirectory.
+1. Give the `www-data` user ownership of the project's `storage` subdirectory.
sudo chown -R www-data.www-data /var/www/example-app/storage
-1. Create an NGINX configuration file for the website, and add the contents shown below. Replace `example.com` with your server's domain name.
+1. Create an NGINX configuration file for the website, and add the contents shown below. Replace `example.com` with your server's domain name.
- {{< file "/etc/nginx/sites-available/example-app" >}}
-server {
- listen 80;
- server_name example.com;
- root /var/www/example-app/public;
+ ```file {title="/etc/nginx/sites-available/example-app"}
+ server {
+ listen 80;
+ server_name example.com;
+ root /var/www/example-app/public;
- add_header X-Frame-Options "SAMEORIGIN";
- add_header X-Content-Type-Options "nosniff";
+ add_header X-Frame-Options "SAMEORIGIN";
+ add_header X-Content-Type-Options "nosniff";
- index index.php;
+ index index.php;
- charset utf-8;
+ charset utf-8;
- location / {
- try_files $uri $uri/ /index.php?$query_string;
- }
+ location / {
+ try_files $uri $uri/ /index.php?$query_string;
+ }
- location = /favicon.ico { access_log off; log_not_found off; }
- location = /robots.txt { access_log off; log_not_found off; }
+ location = /favicon.ico { access_log off; log_not_found off; }
+ location = /robots.txt { access_log off; log_not_found off; }
- error_page 404 /index.php;
+ error_page 404 /index.php;
- location ~ \.php$ {
- fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
- fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
- include fastcgi_params;
- }
+ location ~ \.php$ {
+ fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
+ fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
+ include fastcgi_params;
+ }
- location ~ /\.(?!well-known).* {
- deny all;
+ location ~ /\.(?!well-known).* {
+ deny all;
+ }
}
-}
- {{< /file >}}
+ ```
-1. Create a symbolic link of the configuration file in the NGINX `sites-enabled` directory. You can also remove the `default` site configuration from this directory.
+1. Create a symbolic link of the configuration file in the NGINX `sites-enabled` directory. You can also remove the `default` site configuration from this directory.
sudo ln -s /etc/nginx/sites-available/example-app /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default
-1. Verify the NGINX configuration.
+1. Verify the NGINX configuration.
sudo nginx -t
-1. Enable and start the NGINX service.
+1. Enable and start the NGINX service.
sudo systemctl enable nginx
sudo systemctl start nginx
-1. Similarly, enable and start the PHP-FPM service, which NGINX uses to run your application.
+1. Similarly, enable and start the PHP-FPM service, which NGINX uses to run your application.
- - On Debian and Ubuntu, use:
+ - On Debian and Ubuntu, use:
sudo systemctl enable php7.4-fpm
sudo systemctl start php7.4-fpm
- - On CentOS, use:
+ - On CentOS, use:
sudo systemctl enable php-fpm
sudo systemctl start php-fpm
-1. Your application should now be running — visit it by navigating to your server's domain name in your browser. Make sure you prefix your domain name with `http` rather than `https`, as the server has not been set up with an [SSL certificate](/docs/guides/security/ssl/).
+1. Your application should now be running — visit it by navigating to your server's domain name in your browser. Make sure you prefix your domain name with `http` rather than `https`, as the server has not been set up with an [SSL certificate](/docs/guides/security/ssl/).
## Conclusion
-You now have your own Laravel website up and running! To build on what you created by following this guide, be sure to take a look through [Laravel's documentation](https://laravel.com/docs/8.x). There, you can find plenty of information to dive deeper into the components of a Laravel application.
+You now have your own Laravel website up and running! To build on what you created by following this guide, be sure to take a look through [Laravel's documentation](https://laravel.com/docs/8.x). There, you can find plenty of information to dive deeper into the components of a Laravel application.
\ No newline at end of file
diff --git a/docs/guides/development/javascript/build-mern-stack-chat-application/examples/client/App.js b/docs/guides/development/javascript/build-mern-stack-chat-application/examples/client/App.js
new file mode 100644
index 00000000000..06b3b6eb5ae
--- /dev/null
+++ b/docs/guides/development/javascript/build-mern-stack-chat-application/examples/client/App.js
@@ -0,0 +1,19 @@
+// Import React and the stylesheet.
+import React from 'react';
+import './App.css';
+
+// Import the component to be used for fetching, posting,
+// and displaying messages from the server.
+import Messages from './Messages';
+
+// Initialize the application display, giving a
+// placeholder for the Messages component.
+function App() {
+ return (
+    <div className="App">
+      <Messages />
+    </div>
+ );
+}
+
+export default App;
diff --git a/docs/guides/development/javascript/build-mern-stack-chat-application/examples/client/Messages.js b/docs/guides/development/javascript/build-mern-stack-chat-application/examples/client/Messages.js
new file mode 100644
index 00000000000..1debe58b660
--- /dev/null
+++ b/docs/guides/development/javascript/build-mern-stack-chat-application/examples/client/Messages.js
@@ -0,0 +1,79 @@
+// Import React's Component and Axios.
+import React, { Component } from 'react';
+import axios from 'axios';
+
+// Create the component for handling messages.
+class Messages extends Component {
+ // Create an object to hold the list of messages and the message
+ // being prepared for sending.
+ state = {
+ list: [],
+ toSend: ""
+ };
+
+ // When the component loads, get existing messages from the server.
+ componentDidMount() {
+ this.fetchMessages();
+ }
+
+ // Get messages from the server.
+ fetchMessages = () => {
+ axios
+ .get('/messages')
+ .then((res) => {
+ if (res.data) {
+ this.setState({ list: res.data, toSend: "" });
+ let inputField = document.getElementById("textInputField");
+ inputField.value = "";
+ } else {
+ this.setState({ list: ["No messages!"] });
+ }
+ })
+ .catch((err) => console.log(err));
+ }
+
+ // Post new messages to the server, and make a call to update
+ // the list of messages.
+ sendMessage = () => {
+ if (this.state.toSend === "") {
+ console.log("Enter message text.")
+ } else {
+ axios
+ .post('/messages', { messageText: this.state.toSend })
+ .then((res) => {
+ if (res.data) {
+ this.fetchMessages();
+ }
+ })
+ .catch((err) => console.log(err));
+ }
+ }
+
+ // Display the list of messages.
+ listMessages = () => {
+ if (this.state.list && this.state.list.length > 0) {
+ return (this.state.list.map((message) => {
+ return (
+
+ );
+ }
+}
+
+export default Messages;
diff --git a/docs/guides/development/javascript/build-mern-stack-chat-application/examples/server/index.js b/docs/guides/development/javascript/build-mern-stack-chat-application/examples/server/index.js
new file mode 100644
index 00000000000..f99d721807e
--- /dev/null
+++ b/docs/guides/development/javascript/build-mern-stack-chat-application/examples/server/index.js
@@ -0,0 +1,54 @@
+// Set up ExpressJS.
+const express = require("express");
+const bodyParser = require('body-parser');
+const app = express();
+const router = express.Router();
+const port = 5000;
+
+// Set up Mongoose.
+const mongoose = require('mongoose');
+const mongoDbUrl = 'mongodb://127.0.0.1/example_database';
+
+// Import MongoDB models.
+const MessageModel = require('./models/message.js');
+
+// Connect to the database.
+mongoose
+ .connect(mongoDbUrl, {useNewUrlParser: true, useUnifiedTopology: true})
+ .then(() => console.log('Database connection established.'))
+ .catch((err) => console.log('Database connection error: ' + err))
+mongoose.Promise = global.Promise;
+
+// Prevent possible cross-origin issues.
+app.use((req, res, next) => {
+ res.header('Access-Control-Allow-Origin', '*');
+ res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
+ next();
+});
+
+// Necessary to handle the JSON data.
+app.use(bodyParser.json());
+
+// Create endpoints for the frontend to access.
+app.get('/messages', (req, res, next) => {
+ MessageModel
+ .find({}, 'messageText')
+ .then((data) => res.json(data))
+ .catch(next);
+});
+
+app.post('/messages', (req, res, next) => {
+ if (req.body.messageText) {
+ MessageModel.create(req.body)
+ .then((data) => res.json(data))
+ .catch(next);
+ } else {
+ res.json({error: "Please provide message text."});
+ }
+});
+
+// Listen on the port.
+app.listen(port, () => {
+ console.log(`Server is running on port: ${port}`);
+});
+
diff --git a/docs/guides/development/javascript/build-mern-stack-chat-application/examples/server/message.js b/docs/guides/development/javascript/build-mern-stack-chat-application/examples/server/message.js
new file mode 100644
index 00000000000..e331822d267
--- /dev/null
+++ b/docs/guides/development/javascript/build-mern-stack-chat-application/examples/server/message.js
@@ -0,0 +1,16 @@
+// Set up Mongoose.
+const mongoose = require('mongoose');
+const Schema = mongoose.Schema;
+
+// Create a schema to be used for the MessageModel.
+const MessageSchema = new Schema({
+ messageText: {
+ type: String,
+    required: [true, 'This field is required.'],
+ },
+});
+
+// Create the message model from the MessageSchema.
+const MessageModel = mongoose.model('message', MessageSchema);
+
+module.exports = MessageModel;
diff --git a/docs/guides/development/javascript/build-mern-stack-chat-application/index.md b/docs/guides/development/javascript/build-mern-stack-chat-application/index.md
new file mode 100644
index 00000000000..2b254f5e0d2
--- /dev/null
+++ b/docs/guides/development/javascript/build-mern-stack-chat-application/index.md
@@ -0,0 +1,480 @@
+---
+slug: build-mern-stack-chat-application
+title: "Build a Basic Chat Application using the MERN Stack"
+description: "Learn how to develop a MERN stack app for an Ubuntu or Debian server."
+authors: ["Nathaniel Stickman"]
+contributors: ["Nathaniel Stickman"]
+published: 2023-09-14
+modified: 2024-05-06
+keywords: ['mern stack','mern tutorial','mern app']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[MongoDB: How to Use MERN Stack](https://www.mongodb.com/languages/mern-stack-tutorial)'
+- '[Mozilla Developer Network Web Docs: Express Tutorial Part 3: Using a Database (with Mongoose)](https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs/mongoose)'
+---
+
+MERN is a modern web application development stack consisting of MongoDB, Express JS, React, and Node.js. Its bundle of robust and well-supported open-source software provides a solid foundation for building a wide range of web applications.
+
+The MERN stack has the advantage of using React. Other variants exist, like the MEAN stack (which uses Angular) and the MEVN stack (which uses Vue). But with React, you get the advantage of server-side rendering and improved availability for web crawlers.
+
+This MERN tutorial helps you get started building a MERN app of your own for an Ubuntu 20.04 or Debian 10 server.
+
+## Before You Begin
+
+1. Familiarize yourself with our [Getting Started with Linode](/docs/getting-started/) guide, and complete the steps for setting your Linode's hostname and timezone.
+
+1. This guide uses `sudo` wherever possible. Complete the sections of our [How to Secure Your Server](/docs/security/securing-your-server/) guide to create a standard user account, harden SSH access, and remove unnecessary network services.
+
+1. Update your system using the following command:
+
+ ```command
+ sudo apt update && sudo apt upgrade
+ ```
+
+{{< note >}}
+The steps in this guide are written for non-root users. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see the [Linux Users and Groups](/docs/tools-reference/linux-users-and-groups/) guide.
+{{< /note >}}
+
+## What Is MERN Stack?
+
+A MERN architecture is a full-stack framework for developing modern web applications. It is a variation of the MEAN stack but replaces Angular (the **A**) with React.
+
+A MERN stack is made up of the following components:
+
+- [MongoDB](https://www.mongodb.com/) document database
+- [Express JS](https://expressjs.com/) server-side framework
+- [React](https://reactjs.org/) client-side framework
+- [Node](https://nodejs.org/en/about/) web server
+
+Each of these technologies is well-supported and offers robust features. This makes a MERN stack a good choice for developing new web applications.
+
+## How to Develop a MERN App
+
+This section walks you through installing MongoDB and Node.js, then setting up an Express JS server and a React frontend. By the end, you have a complete MERN app, ready to be customized and expanded to your needs.
+
+After building the application, the last section of this guide shows you how to start up your MERN stack and test it out.
+
+### Install the Prerequisites
+
+Two of the MERN components should be installed before you start on your project: MongoDB and Node.js. Once you have them installed, you can create a project, where you install Express JS and React as dependencies.
+
+#### Install MongoDB
+
+1. Install `gnupg` using the following command:
+
+ ```command
+ sudo apt install gnupg
+ ```
+
+1. Import the GPG key for MongoDB.
+
+ ```command
+ wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -
+ ```
+
+1. Add the MongoDB package list to APT.
+
+ {{< tabs >}}
+ {{< tab "Debian 10 (Buster)" >}}
+ ```command
+ echo "deb http://repo.mongodb.org/apt/debian buster/mongodb-org/5.0 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list
+ ```
+ {{< /tab >}}
+ {{< tab "Ubuntu 20.04 (Focal)" >}}
+ ```command
+ echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list
+ ```
+ {{< /tab >}}
+ {{< /tabs >}}
+
+1. Update the APT package index using the following command:
+
+ ```command
+ sudo apt update
+ ```
+
+1. Install MongoDB using the following command:
+
+ ```command
+ sudo apt install mongodb-org
+ ```
+
+See the official documentation for more on installing MongoDB [on Debian](https://docs.mongodb.com/manual/tutorial/install-mongodb-on-debian/) and [on Ubuntu](https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/). You can also refer to our [How To Install MongoDB on Ubuntu 16.04](/docs/guides/install-mongodb-on-ubuntu-16-04/) guide.
+
+#### Install Node.js
+
+1. Install the Node Version Manager, the preferred method for installing Node.js.
+
+ ```command
+ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
+ ```
+
+1. Restart your shell session (logging out and logging back in), or run the following commands:
+
+ ```command
+ export NVM_DIR="$HOME/.nvm"
+ [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
+ [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
+ ```
+
+1. Install the current version of Node.js:
+
+ ```command
+ nvm install node
+ ```
+
+You can additionally refer to our [How to Install and Use the Node Package Manager (NPM) on Linux](/docs/guides/install-and-use-npm-on-linux/#how-to-install-or-update-npm) guide.
+
+### Developing the App
+
+The MERN app project itself consists of two components:
+
+- Express JS provides the backend web API, connecting to MongoDB to store and retrieve data
+- React provides the frontend, giving the user interface for interacting with the application
+
+The next sections show you how to set these up for a basic chat application.
+
+#### Create the Express JS Server
+
+1. Create a directory for your project, and then a subdirectory for your Express JS server. Then, change into the Express JS subdirectory.
+
+ This example creates a project directory under the current user's home directory and an Express JS subdirectory named `server`.
+
+ ```command
+ mkdir -p ~/example-mern-app/server
+ cd ~/example-mern-app/server
+ ```
+
+1. Initialize a Node.js project, and install Express JS. At the same time, install the Mongoose module for working with MongoDB:
+
+ ```command
+ npm init -y
+ npm install --save express mongoose
+ ```
+
+1. Create an `index.js` file, and give it the contents shown below. The purpose of each part of this code is elaborated in comments within the code:
+
+ ```file {title="index.js" lang="js"}
+ // Set up ExpressJS.
+ const express = require("express");
+ const bodyParser = require('body-parser');
+ const app = express();
+ const router = express.Router();
+ const port = 5000;
+
+ // Set up Mongoose.
+ const mongoose = require('mongoose');
+ const mongoDbUrl = 'mongodb://127.0.0.1/example_database';
+
+ // Import MongoDB models.
+ const MessageModel = require('./models/message.js');
+
+ // Connect to the database.
+ mongoose
+ .connect(mongoDbUrl, {useNewUrlParser: true, useUnifiedTopology: true})
+ .then(() => console.log('Database connection established.'))
+ .catch((err) => console.log('Database connection error: ' + err))
+ mongoose.Promise = global.Promise;
+
+ // Prevent possible cross-origin issues.
+ app.use((req, res, next) => {
+ res.header('Access-Control-Allow-Origin', '*');
+ res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
+ next();
+ });
+
+ // Necessary to handle the JSON data.
+ app.use(bodyParser.json());
+
+ // Create endpoints for the frontend to access.
+ app.get('/messages', (req, res, next) => {
+ MessageModel
+ .find({}, 'messageText')
+ .then((data) => res.json(data))
+ .catch(next);
+ });
+
+ app.post('/messages', (req, res, next) => {
+ if (req.body.messageText) {
+ MessageModel.create(req.body)
+ .then((data) => res.json(data))
+ .catch(next);
+ } else {
+ res.json({error: "Please provide message text."});
+ }
+ });
+
+ // Listen on the port.
+ app.listen(port, () => {
+ console.log(`Server is running on port: ${port}`);
+ });
+ ```
+
+ You can, alternatively, download the [above file](examples/server/index.js) directly.
+
+1. Create a `models` directory, and create a `message.js` file in it. Give the file the following contents:
+
+ ```file {title="models/message.js" lang="js"}
+ // Set up Mongoose.
+ const mongoose = require('mongoose');
+ const Schema = mongoose.Schema;
+
+ // Create a schema to be used for the MessageModel.
+ const MessageSchema = new Schema({
+ messageText: {
+ type: String,
+        required: [true, 'This field is required.'],
+ },
+ });
+
+ // Create the message model from the MessageSchema.
+ const MessageModel = mongoose.model('message', MessageSchema);
+
+ module.exports = MessageModel;
+ ```
+
+ As above, you can also download [this file](examples/server/message.js) directly.
+
+1. To verify that everything is working, start the MongoDB service and run the Express JS server with the following commands:
+
+ ```command
+ sudo systemctl start mongod
+ node index.js
+ ```
+
+ Upon execution of the commands above, you should observe the following output:
+
+ ```output
+ Server is running on port: 5000
+ Database connection established.
+ ```
+
+1. You can make sure that the endpoints are working with the following two cURL commands:
+
+ ```command
+ curl -X POST -H "Content-Type: application/json" -d '{"messageText":"This is a test."}' localhost:5000/messages
+ curl localhost:5000/messages
+ ```
+
+ Upon execution of the commands above, you should observe the following output:
+
+ ```output
+ [{"_id":"61784e4251b2842f3ffe6eaf", "messageText":"This is a test."}]
+ ```
+
+ Once you have seen that the server is working, you can stop it with the Ctrl + C combination.
+
+1. To run the Express JS server and React simultaneously, the [Start MERN Stack Services](#start-mern-stack-services) section below uses the `concurrently` Node.js module. Install the module using the command below:
+
+ ```command
+ npm install --save-dev concurrently
+ ```
+
+1. Open the `package.json` file, and change the `scripts` portion as shown below. This allows you to start the server and frontend simultaneously with a single command:
+
+ ```file {title="package.json" lang="json"}
+ {
+ //...
+ "scripts": {
+ "server": "node index.js",
+ "client": "cd ../client && npm start",
+ "app_stack": "concurrently \"npm run server\" \"npm run client\""
+ },
+ //...
+ }
+ ```
+
+You can learn more about getting started with Express JS in our guide [Express JS Tutorial: Get Started Building a Website](/docs/guides/express-js-tutorial/).
+
+#### Create the React Frontend
+
+1. Change into the main project directory, and use the React project creation tool to initialize a React app. This example names the React project `client`.
+
+ ```command
+ cd ~/example-mern-app
+ npx create-react-app client
+ ```
+
+1. Change into the new React directory, and install the `axios` module. This facilitates making requests from the frontend to the Express JS server.
+
+    ```command
+    cd client
+    npm install --save axios
+    ```
+
+1. Open the `App.js` file, and give it the contents shown below. This requires you to delete the file's existing contents. You can find comments throughout the code elaborating on each part.
+
+ ```file {title="App.js" lang="js"}
+ // Import React and the stylesheet.
+ import React from 'react';
+ import './App.css';
+
+ // Import the component to be used for fetching, posting,
+ // and displaying messages from the server.
+ import Messages from './Messages';
+
+ // Initialize the application display, giving a
+ // placeholder for the Messages component.
+ function App() {
+ return (
+        <div className="App">
+          <Messages />
+        </div>
+ );
+ }
+
+ export default App;
+ ```
+
+ You can also download [this file](examples/client/App.js) directly.
+
+1. Create a `Messages.js` file, and give it the contents shown below:
+
+ ```file {title="Messages.js" lang="js"}
+ // Import React's Component and Axios.
+ import React, { Component } from 'react';
+ import axios from 'axios';
+
+ // Create the component for handling messages.
+ class Messages extends Component {
+ // Create an object to hold the list of messages and the message
+ // being prepared for sending.
+ state = {
+ list: [],
+ toSend: ""
+ };
+
+ // When the component loads, get existing messages from the server.
+ componentDidMount() {
+ this.fetchMessages();
+ }
+
+ // Get messages from the server.
+ fetchMessages = () => {
+ axios
+ .get('/messages')
+ .then((res) => {
+ if (res.data) {
+ this.setState({ list: res.data, toSend: "" });
+ let inputField = document.getElementById("textInputField");
+ inputField.value = "";
+ } else {
+ this.setState({ list: ["No messages!"] });
+ }
+ })
+ .catch((err) => console.log(err));
+ }
+
+ // Post new messages to the server, and make a call to update
+ // the list of messages.
+ sendMessage = () => {
+ if (this.state.toSend === "") {
+ console.log("Enter message text.")
+ } else {
+ axios
+ .post('/messages', { messageText: this.state.toSend })
+ .then((res) => {
+ if (res.data) {
+ this.fetchMessages();
+ }
+ })
+ .catch((err) => console.log(err));
+ }
+ }
+
+ // Display the list of messages.
+ listMessages = () => {
+ if (this.state.list && this.state.list.length > 0) {
+ return (this.state.list.map((message) => {
+ return (
+
+ );
+ }
+ }
+
+ export default Messages;
+ ```
+ As before, you can also download [this file](examples/client/Messages.js) directly.
+
+1. Open the `package.json` file, and add the line shown below. This allows you to use shorthand for the Express JS server endpoints, which you can see in the `Messages.js` file above.
+
+ ```file {title="package.json" lang="json"}
+ {
+ //...
+      "proxy": "http://localhost:5000"
+ }
+ ```
+
+1. Verify that the frontend is working by starting it up using the following command:
+
+ ```command
+ npm start
+ ```
+
+ You can see the front end by navigating to `localhost:3000`.
+
+ Stop the frontend server at any time with the Ctrl + C key combination.
+
+To learn more about building applications with React, refer to the [official documentation](https://reactjs.org/docs/getting-started.html).
+
+### Start MERN Stack Services
+
+With the prerequisites installed and the project set up, you can now start up your new MERN app. These steps show you how to get all of the necessary parts running and then connect to your application, even remotely.
+
+1. Start the MongoDB service.
+
+ ```command
+ sudo systemctl start mongod
+ ```
+
+1. Enable the legacy OpenSSL provider in Node.js, required to run React:
+
+ ```command
+ export NODE_OPTIONS=--openssl-legacy-provider
+ ```
+
+ To make this configuration persistent, add the line above to your `~/.bashrc` file.
+
+1. Change into the project's `server` directory, and execute the command below:
+
+ ```command
+ npm run app_stack
+ ```
+
+ Your MERN stack application should now be running. Access the frontend by navigating to `localhost:3000` in a browser. You can access the application remotely using an SSH tunnel:
+
+ - On **Windows**, use the PuTTY tool to set up your SSH tunnel. Follow the appropriate section of the [Setting up an SSH Tunnel with Your Linode for Safe Browsing](/docs/guides/setting-up-an-ssh-tunnel-with-your-linode-for-safe-browsing/#windows) guide, replacing the example port number there with `3000`.
+
+ - On **macOS** or **Linux**, use the following command to set up the SSH tunnel. Replace `example-user` with your username on the application server and `192.0.2.0` with the server's IP address.
+
+ ```command
+ ssh -L3000:localhost:3000 example-user@192.0.2.0
+ ```
+
+    ![Example MERN chat application running in the browser](mern-app-example.png)
+
+When you are ready to make your application accessible to the public, take a look at our [Deploying a React Application on Debian 10](/docs/guides/how-to-deploy-a-react-app-on-debian-10/) guide. Specifically, the [Configure your Web Server](/docs/guides/how-to-deploy-a-react-app-on-debian-10/#configure-your-web-server) and [Create your Deployment Script](/docs/guides/how-to-deploy-a-react-app-on-debian-10/#create-your-deployment-script) sections give you the additional steps you need to make your React frontend available.
+
+## Conclusion
+
+You now have a working MERN stack application. The code above can form a basis that you can modify and expand to fit your needs.
+
+Ready to deploy your MERN stack app to a server? Refer to our [Deploy a MERN Stack Application on Akamai](/docs/guides/deploy-a-mern-stack-application/) guide. There, you can learn how to set up a server for a MERN stack and copy over your MERN project for deployment.
+
+One way you can enhance your MERN stack app is by adding authentication. Learn how to implement authentication into your Express JS server through our [User Authentication with JSON Web Tokens (JWTs) and Express](/docs/guides/how-to-authenticate-using-jwt/) guide.
\ No newline at end of file
diff --git a/docs/guides/development/javascript/build-mern-stack-chat-application/mern-app-example.png b/docs/guides/development/javascript/build-mern-stack-chat-application/mern-app-example.png
new file mode 100644
index 00000000000..9c2a844f0ac
Binary files /dev/null and b/docs/guides/development/javascript/build-mern-stack-chat-application/mern-app-example.png differ
diff --git a/docs/guides/development/javascript/deploy-a-mern-stack-application/index.md b/docs/guides/development/javascript/deploy-a-mern-stack-application/index.md
new file mode 100644
index 00000000000..ee1fcec3af1
--- /dev/null
+++ b/docs/guides/development/javascript/deploy-a-mern-stack-application/index.md
@@ -0,0 +1,379 @@
+---
+slug: deploy-a-mern-stack-application
+title: "Deploy a MERN Stack Application on Akamai"
+description: "Learn how to deploy a locally developed MERN stack app to Akamai two different ways."
+authors: ["Nathaniel Stickman", "Linode"]
+contributors: ["Nathaniel Stickman", "Linode"]
+published: 2023-09-14
+modified: 2024-05-06
+keywords: ['deploy react app','mern stack','how to deploy react app']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[MongoDB: MERN Stack Explained](https://www.mongodb.com/mern-stack)'
+- '[GitHub: rfdickerson/mern-example: MERN Stack Starter](https://github.com/rfdickerson/mern-example)'
+---
+
+MERN is a stack for modern web applications. It consists of MongoDB, Express JS, React, and Node.js — all well-established open-source technologies that make a solid foundation for new web applications.
+
+This guide helps you deploy your existing MERN stack project onto Akamai cloud compute, using the MERN Marketplace App or by manually installing the MERN stack on a new Compute Instance. After your server is set up, learn how to copy your project to your server. If you do not yet have an existing project and wish to create a new MERN application, review one of the following guides instead:
+
+- [Install the MERN Stack and Create an Example Application](/docs/guides/install-the-mern-stack/)
+
+- [Build a Basic Chat Application using the MERN Stack](/docs/guides/build-mern-stack-chat-application/)
+
+## Before You Begin
+
+1. Familiarize yourself with our [Getting Started with Linode](/docs/getting-started/) guide, and complete the steps for setting your Linode's hostname and timezone.
+
+1. This guide uses `sudo` wherever possible. Complete the sections of our [How to Secure Your Server](/docs/security/securing-your-server/) guide to create a standard user account, harden SSH access, and remove unnecessary network services.
+
+1. Update your system using the following command:
+
+ ```command
+ sudo apt update && sudo apt upgrade
+ ```
+
+{{< note >}}
+The steps in this guide are written for non-root users. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see the [Linux Users and Groups](/docs/tools-reference/linux-users-and-groups/) guide.
+{{< /note >}}
+
+## What Is the MERN Stack?
+
+A MERN architecture is a full-stack framework for developing modern web applications. It is a variation of the MEAN stack, but replaces Angular (the **A**) with React.
+
+A MERN stack is made up of the following components:
+
+- [MongoDB](https://www.mongodb.com/) document database
+- [Express JS](https://expressjs.com/) server-side framework
+- [React](https://reactjs.org/) client-side framework
+- [Node](https://nodejs.org/en/about/) web server
+
+Each of these technologies is well-supported and offers robust features. This makes a MERN stack a good choice for developing new web applications.
+
+As noted above, other variants exist, like the MEAN stack (which uses Angular) and the MEVN stack (which uses Vue). But MERN uses React, so you get the advantages of its server-side rendering and improved availability for web crawlers.
+
+## Deploy the MERN Stack on Akamai
+
+To deploy a functional MERN stack on a server, select from one of the deployment options below:
+
+- **Linode Marketplace:** Deploy the [MERN App](https://www.linode.com/marketplace/apps/linode/mern/) through the Linode Marketplace to automatically install MongoDB, Node.js, Express, and React. This is the easiest method and enables you to quickly get up and running without needing to install and configure everything manually. Just note that when choosing this method, you are limited to the distribution images supported by the Marketplace App.
+
+- **Manual Installation:** If you wish to have full control over application versions and the initial configuration, you can manually install all required components. To do so, follow the [Manually Install the MERN Stack](#manually-install-the-mern-stack) section below.
+
+### Manually Install the MERN Stack
+
+To install the components for a MERN stack yourself, you can follow the steps below. These walk you through installing MongoDB and Node.js and adding Express JS and React to your project if they are not already added.
+
+Further on, you can also see how to start up your MERN stack application once all of the components have been installed. By the end, you have a functioning MERN application running on your server.
+
+To get started, you need to install each of the components that make up a MERN stack. For Express JS and React, this typically means starting a Node.js project and setting up Express JS and React as dependencies.
+
+### Install MongoDB
+
+1. Install `gnupg` using the following command:
+
+ ```command
+ sudo apt install gnupg
+ ```
+
+1. Import the GPG key for MongoDB.
+
+ ```command
+ wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -
+ ```
+
+1. Add the MongoDB package list to APT.
+
+ {{< tabs >}}
+ {{< tab "Debian 10 (Buster)" >}}
+ ```command
+ echo "deb http://repo.mongodb.org/apt/debian buster/mongodb-org/5.0 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list
+ ```
+ {{< /tab >}}
+ {{< tab "Ubuntu 20.04 (Focal)" >}}
+ ```command
+ echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list
+ ```
+ {{< /tab >}}
+ {{< /tabs >}}
+
+1. Update the APT package index using the following command:
+
+ ```command
+ sudo apt update
+ ```
+
+1. Install MongoDB using the following command:
+
+ ```command
+ sudo apt install mongodb-org
+ ```
+
+See the official documentation for more on installing MongoDB [on Debian](https://docs.mongodb.com/manual/tutorial/install-mongodb-on-debian/) and [on Ubuntu](https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/). You can also refer to our guide [How To Install MongoDB on Ubuntu 16.04](/docs/guides/install-mongodb-on-ubuntu-16-04/).
+
+### Install Node.js
+
+1. Install the Node Version Manager, the preferred method for installing Node.js.
+
+ ```command
+ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
+ ```
+
+1. Restart your shell session (logging out and logging back in), or run the following commands:
+
+ ```command
+ export NVM_DIR="$HOME/.nvm"
+ [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
+ [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
+ ```
+
+1. Install the current version of Node.js:
+
+ ```command
+ nvm install node
+ ```
+
+1. If your project uses the Yarn package manager instead of NPM, you need to install Yarn as well. You can do so with:
+
+ ```command
+ npm install -g yarn
+ ```
+
+You can additionally refer to our [How to Install and Use the Node Package Manager (NPM) on Linux](/docs/guides/install-and-use-npm-on-linux/#how-to-install-or-update-npm) guide. If you are interested in using Yarn instead of NPM, take a look at our [How to Install and Use the Yarn Package Manager](/docs/guides/install-and-use-the-yarn-package-manager/) guide.
+
+### Install Express JS
+
+If you have an existing MERN project using Express JS, you only need to install the project's Node.js dependencies. Doing so is covered in the [Upload Your Application](#upload-your-application) section.
+
+Otherwise, you can add Express JS as a dependency to your NPM project using this command. This also adds the Mongoose module, which is typically the module used for connecting to MongoDB from Express JS.
+
+```command
+npm install --save express mongoose
+```
+
+If you are working on a Yarn project, use the command below instead:
+
+```command
+yarn add express mongoose
+```
+
+Learn more about getting started with Express JS in our guide [Express JS Tutorial: Get Started Building a Website](/docs/guides/express-js-tutorial/).
+
+### Install React (if necessary for server-side rendering)
+
+As with Express JS, you only need to install your Node.js dependencies if you already have React in your existing MERN project. This guide covers installing those dependencies in the [Upload Your Application](#upload-your-application) section.
+
+Otherwise, you can add React to your NPM project with a command like the one here. This also includes the Axios module, typically used for communications between React and the Express JS server.
+
+```command
+npm install --save react react-dom axios
+```
+
+Alternatively, use a command like the next one if your project uses Yarn instead of NPM.
+
+```command
+yarn add react react-dom axios
+```
+
+Find out more about building applications with React from the [official documentation](https://reactjs.org/docs/getting-started.html) and in our guide [Deploying a React Application on Debian 10](/docs/guides/how-to-deploy-a-react-app-on-debian-10/#create-an-example-react-app).
+
+## Upload Your Application
+
+There are two recommended methods for getting your locally-developed MERN project onto your server instance:
+
+- Copy your code to the server over SSH. You can use the `scp` command to do so, even on Windows. This method works well if you subsequently intend to work with the project files on the server exclusively.
+
+- House your MERN stack code in a remote Git repository. Then, pull your code down from the remote repository to your server. While requiring more effort to set up, this method helps keep your project consistent as you work on it across multiple machines.
+
+Below, you can find instructions for each of these methods.
+
+### Copy a Project to a Server Using SCP
+
+To follow along, you can download the [MERN stack starter](https://github.com/rfdickerson/mern-example) project, a small project demonstrating how a MERN stack application works.
+
+1. Using `scp`, copy your project's directory to the server.
+
+ - On **Linux** and **macOS**, execute a command like the one below. Replace the path to your MERN project directory with the actual path. Likewise, replace `example-user` with your user on the server instance and `192.0.2.0` with the instance's IP address.
+
+ ```command
+ scp -r ~/mern-example example-user@192.0.2.0:~/
+ ```
+
+ - On **Windows**, you first need to open port **22** on the server instance. Log into your server instance, and use UFW to open port **22**.
+
+ ```command
+ sudo ufw allow 22
+ sudo ufw reload
+ ```
+
+ The above commands require you to have the UFW utility installed. It comes pre-installed if you use the Linode Marketplace one-click app. Otherwise, you can learn how to use UFW from our [How to Secure Your Server](/docs/security/securing-your-server/) guide discussed above.
+
+ You can now use `scp` from your Windows machine, with a command like the one below. Replace the path to your MERN project folder with the actual path. Likewise, replace `example-user` with your user on the server instance and `192.0.2.0` with the instance's IP address:
+
+ ```command
+ scp -r "C:\mern-example" example-user@192.0.2.0:~/
+ ```
+
+1. Delete the `node_modules` directory from the copy of the project on your server. It is best to reinstall these due to potential system differences affecting the modules. Replace the path given below with the actual path to your project's `node_modules` directory.
+
+ ```command
+ rm -r ~/mern-example/node_modules
+ ```
+
+ Your project may have more than one such directory, depending on whether the Express JS and React portions were created as separate NPM/Yarn projects. Be sure to remove each `node_modules` directory.
+
+### Set Up Git Version Control for Your Project
+
+Take a look at our guide [Introduction to Version Control](/docs/guides/introduction-to-version-control/#installing-git) to learn more about using Git for version control.
+
+The examples in the steps below use GitHub. They assume you have a GitHub account and have created a blank repository on GitHub for pushing your MERN project. You can learn how to create a repository on GitHub using the steps in GitHub's [official documentation](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository).
+
+This first set of steps needs to be taken on your local machine. It sets up your project as a Git repository and pushes it to the remote repository on GitHub.
+
+1. Ensure that Git is installed.
+
+ - On **Linux** systems, you can use your package manager. For instance, on **Debian** and **Ubuntu** use the following command:
+
+ ```command
+ sudo apt install git
+ ```
+
+ - On **macOS**, running the Git command should prompt you to install Git if it is not already installed.
+
+ ```command
+ git --version
+ ```
+
+ - On **Windows**, download the Git binary from the [official website](https://git-scm.com/download/win).
+
+1. Change into your project's directory, and make your project a Git repository if it is not already one. This example assumes your project is in the `mern-example` directory in your current user's home directory.
+
+ ```command
+ cd ~/mern-example
+ git init
+ ```
+
+1. Create a `.gitignore` file at the base of your project. If there are files or directories you do not want to be added to the remote Git repository, add patterns matching those files/directories to the `.gitignore` file. This should include a `/node_modules` pattern to ensure that the Node.js modules do not get carried over.
+
+ As an example, here is a typical `.gitignore` for a Node.js project.
+
+ ```file {title=".gitignore"}
+ .DS_STORE
+ /node_modules
+ /build
+ logs
+ *.log
+ npm-debug.log*
+ ```
+
+1. Stage your project's files for your first Git commit.
+
+ ```command
+ git add .
+ ```
+
+1. Commit the files. It is recommended that you add a brief, descriptive message to each commit you make, as shown below:
+
+ ```command
+ git commit -m "Initial commit."
+ ```
+
+1. Add the remote repository. Replace the URL in the example below with the URL for your remote repository.
+
+ ```command
+ git remote add origin https://github.com/example-user/example-repository.git
+ ```
+
+1. Push your local project to the remote repository.
+
+ ```command
+ git push -u origin master
+ ```
+
+These next steps then need to be taken on the server instance to pull down the project from the remote repository. You can use these steps with the [MERN stack starter](https://github.com/rfdickerson/mern-example) project to have a working example of how pulling down a repository works.
+
+1. Ensure that Git is installed using the following command:
+
+ ```command
+ sudo apt install git
+ ```
+
+1. Change into a directory where you want the project to live. Here, the current user's home directory is used.
+
+ ```command
+ cd ~
+ ```
+
+1. Clone the remote GitHub repository. As above, replace the URL here with the actual URL for your repository.
+
+ ```command
+ git clone https://github.com/rfdickerson/mern-example.git
+ ```
+
+## Install Your Application's Dependencies
+
+1. Now that the files are on your server instance, you need to reinstall the project's Node.js modules. To do so, change into the project directory, and execute one of the commands below.
+
+ - If you used NPM to install modules, use the following command:
+
+ ```command
+ npm install
+ ```
+
+ - If you used Yarn to install modules, use the following command:
+
+ ```command
+ yarn
+ ```
+
+ You can tell which one your project uses by searching its base directory. If you find a `yarn.lock` file, it should be a Yarn project. Otherwise, it should be an NPM project.
+
+ You may need to run the above commands in multiple directories within your project. This depends again on whether you set up Express JS and React as two separate NPM/Yarn projects.
+
+1. Depending on your Node.js and React versions, you may need to enable the legacy OpenSSL provider in Node.js. If you get an OpenSSL error when trying to run React, use the following command:
+
+ ```command
+ export NODE_OPTIONS=--openssl-legacy-provider
+ ```
+
+ To make this configuration persistent, add the line above to your `~/.bashrc` file.
+
+## Start Your Application
+
+1. Start the MongoDB service using the following command:
+
+ ```command
+ sudo systemctl start mongod
+ ```
+
+1. Change into the project's `server` directory, and start up the Express JS and React servers. The commands for this vary depending on your project configuration.
+
+ Typically, you can run an NPM project with a command like the following:
+
+ ```command
+ npm start
+ ```
+
+ Or, if your project uses an NPM script, you might run it with something like this, replacing `mern-project` with the name of the script:
+
+ ```command
+ npm run mern-project
+ ```
+
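+    Such a script is defined in the `scripts` section of your project's `package.json`. As a hypothetical example, the following entry would make `npm run mern-project` launch the server; the script name and the command it runs are placeholders, not values from your project:
+
+    ```file {title="package.json"}
+    {
+      "scripts": {
+        "mern-project": "node index.js"
+      }
+    }
+    ```
+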
+ For the [MERN stack starter](https://github.com/rfdickerson/mern-example) project referenced as an example above, use the following command:
+
+ ```command
+ yarn start-dev
+ ```
+
+You can then visit your application in a browser. By default, React runs on `localhost:3000`, and that is the case for the example application referenced above. To access it remotely, you can use an SSH tunnel.
+
+- On **Windows**, use the PuTTY tool to set up your SSH tunnel. Follow the appropriate section of the [Setting up an SSH Tunnel with Your Linode for Safe Browsing](/docs/guides/setting-up-an-ssh-tunnel-with-your-linode-for-safe-browsing/#windows) guide, replacing the example port number there with **3000**.
+
+- On **macOS** or **Linux**, use the following command to set up the SSH tunnel. Replace `example-user` with your username on the application server and `192.0.2.0` with the server's IP address.
+
+ ```command
+ ssh -L3000:localhost:3000 example-user@192.0.2.0
+ ```
+
+
\ No newline at end of file
diff --git a/docs/guides/development/javascript/deploy-a-mern-stack-application/mern-app-example.png b/docs/guides/development/javascript/deploy-a-mern-stack-application/mern-app-example.png
new file mode 100644
index 00000000000..4ad34f55d5b
Binary files /dev/null and b/docs/guides/development/javascript/deploy-a-mern-stack-application/mern-app-example.png differ
diff --git a/docs/guides/development/javascript/how-to-create-a-mern-stack-application/index.md b/docs/guides/development/javascript/how-to-create-a-mern-stack-application/index.md
deleted file mode 100644
index 95e606cf591..00000000000
--- a/docs/guides/development/javascript/how-to-create-a-mern-stack-application/index.md
+++ /dev/null
@@ -1,278 +0,0 @@
----
-slug: how-to-create-a-mern-stack-application
-title: "Create a MERN Stack Application"
-title_meta: "How to Create a MERN Stack on Linux"
-description: "Learn how to create a MERN stack application on Linux. Read our guide to learn MERN stack basics. ✓ Click here!"
-authors: ["Cameron Laird"]
-contributors: ["Cameron Laird"]
-published: 2022-09-12
-modified: 2022-09-23
-keywords: ['MERN Stack Application','How to create a MERN stack application','MERN stack','MERN stack application', 'learn Linux filesystem', 'MERN stack on Linux']
-license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
-external_resources:
-- '[How to Use MERN Stack: A Complete Guide](https://www.mongodb.com/languages/mern-stack-tutorial)'
-- '[The MERN stack: A complete tutorial](https://blog.logrocket.com/mern-stack-tutorial/)'
-- '[Learn the MERN Stack - Full Tutorial for Beginners (MongoDB, Express, React, Node.js) in 12Hrs (2021)](https://www.youtube.com/watch?v=7CqJlxBYj-M)'
-- '[Learn the MERN Stack - Full Tutorial (MongoDB, Express, React, Node.js)](https://www.youtube.com/watch?v=7CqJlxBYj-M)'
----
-
-Of all the possible technical bases for a modern web site, ["MERN holds the leading position when it comes to popularity."](https://www.gkmit.co/blog/web-app-development/mean-vs-mern-stack-who-will-win-the-war-in-2021) This introduction makes you familiar with the essential tools used for a plurality of all web sites worldwide.
-
-## Before You Begin
-
-1. If you have not already done so, create a Linode account and Compute Instance. See our [Getting Started with Linode](/docs/products/platform/get-started/) and [Creating a Compute Instance](/docs/products/compute/compute-instances/guides/create/) guides.
-
-1. Follow our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/) guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.
-
-{{< note >}}
-The steps in this guide require root privileges. Be sure to run the steps below as `root` or with the `sudo` prefix. For more information on privileges, see our [Users and Groups](/docs/guides/linux-users-and-groups/) guide.
-{{< /note >}}
-
-## What is the MERN stack?
-
-MERN refers to MongoDB, Express.js, ReactJS, and Node.js, four software tools which cooperate to power millions of web sites worldwide. In broad terms:
-
-* [**MongoDB**](/docs/guides/databases/mongodb/) manages data, such as customer information, technical measurements, and event records.
-* [**Express.js**](/docs/guides/express-js-tutorial/) is a web application framework for the "behaviors" of particular applications. For example, how data flows from catalog to shopping cart.
-* [**ReactJS**](/docs/guides/development/react/) is a library of user-interface components for managing the visual "state" of a web application.
-* [**Node.js**](/docs/guides/development/nodejs/) is a back-end runtime environment for the server side of a web application.
-
-Linode has [many articles](/docs/guides/) on each of these topics, and supports thousands of [Linode customers who have created successful applications](https://www.linode.com/content-type/spotlights/) based on these tools.
-
-One of MERN’s important distinctions is the [JavaScript programming language is used throughout](https://javascript.plainenglish.io/why-mern-stack-is-becoming-popular-lets-see-in-detail-8825fd3fd5ee) the entire stack. Certain competing stacks use PHP or Python on the back end, JavaScript on the front end, and perhaps SQL for data storage. MERN developers focus on just a single programming language, [JavaScript, with all the economies](https://javascript.plainenglish.io/should-you-use-javascript-for-everything-f98015ade40a) that implies, for training and tooling.
-
-## Install the MERN stack
-
-You can install a basic MERN stack on a 64-bit x86_64 [Linode Ubuntu 20.04 host](https://www.linode.com/distributions/) in under half an hour. As of this writing, parts of MERN for Ubuntu 22.04 remain experimental. While thousands of variations are possible, this section typifies a correct "on-boarding" sequence. The emphasis here is on "correct", as scores of already-published tutorials embed enough subtle errors to block their use by readers starting from scratch.
-
-### Install MongoDB
-
-1. Update the repository cache:
-
- apt update -y
-
-2. Install the networking and service dependencies Mongo requires:
-
- apt install ca-certificates curl gnupg2 systemctl wget -y
-
-3. Configure access to the official MongoDB Community Edition repository with the MongoDB public GPG key:
-
- wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | apt-key add -
-
-4. Create a MongoDB list file:
-
- echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-5.0.list
-
-5. Update the repository cache again:
-
- apt update -y
-
-6. Install MongoDB itself:
-
- apt install mongodb-org -y
-
-7. Enable and the MongoDB service:
-
- systemctl enable mongod
-
-8. Launch the MongoDB service:
-
- systemctl start mongod
-
-9. Verify the MongoDB service:
-
- systemctl status mongod
-
- You should see diagnostic information that concludes:
-
- {{< output >}}… Started MongoDB Database Server.{{< /output >}}
-
-0. For an even stronger confirmation that the Mongo server is ready for useful action, connect directly to it and issue this command:
-
- mongo
-
-1. Now issue this command:
-
- db.runCommand({ connectionStatus: 1 })
-
- You should see, along with many other details, this summary of the connectionStatus:
-
- {{< output >}}… MongoDB server … "ok" : 1 …{{< /output >}}
-
-2. Exit Mongo:
-
- exit
-
-### Install Node.js
-
-While the acronym is MERN, the true order of its dependencies is better written as "MNRE". ReactJS and Express.js conventionally require Node.js, so the next installation steps focus on Node.js. As with MongoDB, Node.js's main trusted repository is not available in the main Ubuntu repository.
-
-1. Run this command to adjoin it:
-
- curl -sL https://deb.nodesource.com/setup_16.x | bash -
-
-2. Install Node.js itself:
-
- apt-get install nodejs -y
-
-3. Verify the installation:
-
- node -v
-
- You should see `v16.15.1` or perhaps later.
-
-### Install React.js
-
-1. Next, install React.js:
-
- mkdir demonstration; cd demonstration
- npx --yes create-react-app frontend
- cd frontend
- npm run build
-
-Templates for all the HTML, CSS, and JS for your model application are now present in the demonstration/frontend directory.
-
-### Install Express.js
-
-1. Express.js is the final component of the basic MERN stack.
-
- cd ..; mkdir server; cd server
- npm init -y
- cd ..
- npm install cors express mongodb mongoose nodemon
-
-## Use the MERN stack to create an example application
-
-The essence of a web application is to respond to a request from a web browser with an appropriate result, backed by a datastore that "remembers" crucial information from one session to the next. Any realistic full-scale application involves account management, database backup, context dependence, and other refinements. Rather than risk the distraction and loss of focus these details introduce, this section illustrates the simplest possible use of MERN to implement a [three-tier operation](https://www.ibm.com/cloud/learn/three-tier-architecture) typical of real-world applications.
-
-"Three-tier" in this context refers to the teamwork web applications embody between:
-
-* The presentation in the web browser of the state of an application
-* The "back end" of the application which realizes that state
-* The datastore which supports the back end beyond a single session of the front end or even the restart of the back end.
-
-You can create a tiny application which receives a request from a web browser, creates a database record based on that request, and responds to the request. The record is visible within the Mongo datastore.
-
-### Initial configuration of the MERN application
-
-1. Create `demonstration/server/index.js` with this content:
-
- {{< file "demonstration/server/index.js" javascript >}}
-const express = require('express');
-const bodyParser = require('body-parser');
-const mongoose = require('mongoose');
-const routes = require('../routes/api');
-const app = express();
-const port = 4200;
-
-// Connect to the database
-mongoose
- .connect('mongodb://127.0.0.1:27017/', { useNewUrlParser: true })
- .then(() => console.log(`Database connected successfully`))
- .catch((err) => console.log(err));
-
-// Override mongoose's deprecated Promise with Node's Promise.
-mongoose.Promise = global.Promise;
-app.use((req, res, next) => {
- res.header('Access-Control-Allow-Origin', '*');
- res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
- next();
- });
- app.use(bodyParser.json());
- app.use('/api', routes);
- app.use((err, req, res, next) => {
- console.log(err);
- next();
- });
-
- app.listen(port, () => {
- console.log(`Server runs on port ${port}.`);
- });
-{{ file >}}
-
-2. Create `demonstration/routes/api.js` with this content:
-
- {{< file "demonstration/routes/api.js" javascript >}}
-const express = require('express');
-const router = express.Router();
-
-var MongoClient = require('mongodb').MongoClient;
-var url = 'mongodb://127.0.0.1:27017/';
-const mongoose = require('mongoose');
-var db = mongoose.connection;
-
-router.get('/record', (req, res, next) => {
- item = req.query.item;
- MongoClient.connect(url, function(err, db) {
- if (err) throw err;
- var dbo = db.db("mydb");
- var myobj = { name: item };
- dbo.collection("demonstration").insertOne(myobj, function(err, res) {
- if (err) throw err;
- console.log(`One item (${item}) inserted.`);
- db.close();
- })
- });
-})
-module.exports = router;
-{{ file >}}
-
-3. Create `demonstration/server/server.js` with this content:
-
- {{< file "demonstration/server/server.js" javascript >}}
-const express = require("express");
-const app = express();
-const cors = require("cors");
-require("dotenv").config({ path: "./config.env" });
-const port = process.env.PORT || 4200;
-app.use(cors());
-app.use(express.json());
-app.use(require("./routes/record"));
-const dbo = require("./db/conn");
-
-app.listen(port, () => {
- // Connect on start.
- dbo.connectToServer(function (err) {
- if (err) console.error(err);
- });
- console.log(`Server is running on port: ${port}`);
-});
-{{ file >}}
-
-### Verify your application
-
-1. Launch the application server:
-
- node server/index.js
-
-2. In a convenient Web browser, request:
-
- localhost:4200/api/record?item=this-new-item
-
- At this point, your terminal should display:
-
- {{< output >}}One item (this-new-item) inserted.{{< /output >}}
-
-3. Now launch an interactive shell to connect to the MongoDB datastore:
-
- mongo
-
-4. Within the MongoDB shell, request:
-
- use mydb
- db.demonstration.find({})
-
- Mongo should report that it finds a record:
-
- {{< output >}}{ "_id" : ObjectId("62c84fe504d6ca2aa325c36b"), "name" : "this-new-item" }{{< /output >}}
-
-This demonstrates a minimal MERN action:
-* The web browser issues a request with particular data.
-* The React front end framework routes that request.
-* The Express application server receives the data from the request, and acts on the MongoDB datastore.
-
-## Conclusion
-
-You now know how to install each of the basic components of the MERN stack on a standard Ubuntu 20.04 server, and team them together to demonstrate a possible MERN action: creation of one database record based on a browser request.
-
-Any real-world application involves considerably more configuration and source files. MERN enjoys abundant tooling to make the database and web connections more secure, to validate data systematically, to structure a [complete Application Programming Interface](https://www.mongodb.com/blog/post/the-modern-application-stack-part-3-building-a-rest-api-using-expressjs) (API), and to simplify debugging. Nearly all practical applications need to create records, update, delete, and list them. All these other refinements and extensions use the elements already present in the workflow above. You can build everything your full application needs from this starting point.
\ No newline at end of file
diff --git a/docs/guides/development/javascript/install-the-mern-stack/index.md b/docs/guides/development/javascript/install-the-mern-stack/index.md
new file mode 100644
index 00000000000..6435be4fa72
--- /dev/null
+++ b/docs/guides/development/javascript/install-the-mern-stack/index.md
@@ -0,0 +1,342 @@
+---
+slug: install-the-mern-stack
+title: "Install the MERN Stack and Create an Example Application"
+description: "Learn how to create a MERN stack application on Linux. Read our guide to learn MERN stack basics."
+authors: ["Cameron Laird", "Nathaniel Stickman"]
+contributors: ["Cameron Laird", "Nathaniel Stickman"]
+published: 2022-09-12
+modified: 2024-05-06
+keywords: ['MERN Stack Application','How to create a MERN stack application','MERN stack','MERN stack application', 'learn Linux filesystem', 'MERN stack on Linux']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[How to Use MERN Stack: A Complete Guide](https://www.mongodb.com/languages/mern-stack-tutorial)'
+- '[The MERN stack: A complete tutorial](https://blog.logrocket.com/mern-stack-tutorial/)'
+- '[Learn the MERN Stack - Full Tutorial for Beginners (MongoDB, Express, React, NodeJS) in 12Hrs (2021)](https://www.youtube.com/watch?v=7CqJlxBYj-M)'
+- '[Learn the MERN Stack - Full Tutorial (MongoDB, Express, React, Node.js)](https://www.youtube.com/watch?v=7CqJlxBYj-M)'
+aliases: ['/guides/how-to-create-a-mern-stack-application/']
+---
+
+Of all the possible technical bases for a modern website, ["MERN holds the leading position when it comes to popularity."](https://www.gkmit.co/blog/web-app-development/mean-vs-mern-stack-who-will-win-the-war-in-2021) This introduction familiarizes you with the essential tools behind a large share of websites worldwide.
+
+## Before You Begin
+
+1. If you have not already done so, create a Linode account and Compute Instance. See our [Getting Started with Linode](/docs/products/platform/get-started/) and [Creating a Compute Instance](/docs/products/compute/compute-instances/guides/create/) guides.
+
+1. Follow our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/) guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.
+
+{{< note >}}
+The steps in this guide require root privileges. Be sure to run the steps below as `root` or with the `sudo` prefix. For more information on privileges, see our [Linux Users and Groups](/docs/guides/linux-users-and-groups/) guide.
+{{< /note >}}
+
+## What Is the MERN Stack?
+
+MERN refers to MongoDB, Express.js, ReactJS, and Node.js, four software tools that cooperate to power millions of websites worldwide. In broad terms:
+
+- [**MongoDB**](/docs/guides/databases/mongodb/) manages data, such as customer information, technical measurements, and event records.
+- [**Express.js**](/docs/guides/express-js-tutorial/) is a web application framework for the "behaviors" of particular applications. For example, how data flows from catalog to shopping cart.
+- [**ReactJS**](/docs/guides/development/react/) is a library of user-interface components for managing the visual "state" of a web application.
+- [**Node.js**](/docs/guides/development/nodejs/) is a back-end runtime environment for the server side of a web application.
+
+Linode has [many articles](/docs/guides/) on each of these topics and supports thousands of [Linode customers who have created successful applications](https://www.linode.com/content-type/spotlights/) based on these tools.
+
+One of MERN’s important distinctions is that the [JavaScript programming language is used throughout](https://javascript.plainenglish.io/why-mern-stack-is-becoming-popular-lets-see-in-detail-8825fd3fd5ee) the entire stack. Certain competing stacks use PHP or Python on the back end, JavaScript on the front end, and perhaps SQL for data storage. MERN developers focus on just a single programming language, [JavaScript, with all the economies](https://javascript.plainenglish.io/should-you-use-javascript-for-everything-f98015ade40a) that implies, for training and tooling.
+
+## Install the MERN Stack
+
+You can install a basic MERN stack on a 64-bit x86_64 [Linode Ubuntu 20.04 host](https://www.linode.com/distributions/) in under half an hour. As of this writing, parts of MERN for Ubuntu 22.04 remain experimental. While thousands of variations are possible, this section typifies a correct "on-boarding" sequence. The emphasis here is on "correct", as scores of already-published tutorials embed enough subtle errors to block their use by readers starting from scratch.
+
+### Install MongoDB
+
+1. Update the repository cache using the following command:
+
+ ```command
+ apt update -y
+ ```
+
+1. Install the networking and service dependencies Mongo requires using the following command:
+
+ ```command
+ apt install ca-certificates curl gnupg2 systemctl wget -y
+ ```
+
+1. Import the GPG key for MongoDB.
+
+ ```command
+ wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -
+ ```
+
+1. Add the MongoDB package list to APT.
+
+ {{< tabs >}}
+ {{< tab "Debian 10 (Buster)" >}}
+ ```command
+ echo "deb http://repo.mongodb.org/apt/debian buster/mongodb-org/5.0 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list
+ ```
+ {{< /tab >}}
+ {{< tab "Ubuntu 20.04 (Focal)" >}}
+ ```command
+ echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list
+ ```
+ {{< /tab >}}
+ {{< /tabs >}}
+
+1. Update the APT package index using the following command:
+
+ ```command
+ sudo apt update
+ ```
+
+1. Install MongoDB using the following command:
+
+ ```command
+ sudo apt install mongodb-org
+ ```
+
+See the official documentation for more on installing MongoDB [on Debian](https://docs.mongodb.com/manual/tutorial/install-mongodb-on-debian/) and [on Ubuntu](https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/). You can also refer to our guide [How To Install MongoDB on Ubuntu 16.04](/docs/guides/install-mongodb-on-ubuntu-16-04/).
+
+#### Start MongoDB and Verify the Installation
+
+Once MongoDB has been installed, enable and start the service. You can optionally test MongoDB to verify that it has been installed correctly.
+
+1. Enable the MongoDB service using the following command:
+
+ ```command
+ systemctl enable mongod
+ ```
+
+1. Launch the MongoDB service using the following command:
+
+ ```command
+ systemctl start mongod
+ ```
+
+1. Verify the MongoDB service using the following command:
+
+ ```command
+ systemctl status mongod
+ ```
+
+    You should see diagnostic information confirming that the MongoDB database server has started:
+
+ ```output
+ … Started MongoDB Database Server.
+ ```
+
+1. For further confirmation that the Mongo server is ready, connect directly to it and issue the following command:
+
+ ```command
+ mongo
+ ```
+
+1. Now issue the following command:
+
+ ```command
+ db.runCommand({ connectionStatus: 1 })
+ ```
+
+ You should see, along with many other details, the following summary of the connection status:
+
+ ```output
+ … MongoDB server … "ok" : 1 …
+ ```
+
+1. Exit Mongo using the following command:
+
+ ```command
+ exit
+ ```
+
+### Install Node.js
+
+While the acronym is MERN, the true order of its dependencies is better written as "MNRE". ReactJS and Express.js conventionally require Node.js, so the next installation steps focus on Node.js. As with MongoDB, the latest Node.js releases are not available from the main Ubuntu repository, so this guide installs Node.js using the Node Version Manager (NVM).
+
+1. Install the Node Version Manager (NVM), the preferred method for installing Node.js, using the following command:
+
+ ```command
+ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
+ ```
+
+1. Restart your shell session (logging out and logging back in), or run the following commands:
+
+ ```command
+ export NVM_DIR="$HOME/.nvm"
+ [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
+ [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
+ ```
+
+1. Install the current version of Node.js using the following command:
+
+ ```command
+ nvm install node
+ ```
+
+1. If you are deploying an existing project that uses the Yarn package manager instead of NPM, you need to install Yarn as well. You can do so with the following command:
+
+ ```command
+ npm install -g yarn
+ ```
+
+You can additionally refer to our [How to Install and Use the Node Package Manager (NPM) on Linux](/docs/guides/install-and-use-npm-on-linux/#how-to-install-or-update-npm) guide. If you are interested in using Yarn instead of NPM, take a look at our [How to Install and Use the Yarn Package Manager](/docs/guides/install-and-use-the-yarn-package-manager/) guide.
+
+### Install React.js
+
+Install React.js using the following commands:
+
+```command
+mkdir demonstration; cd demonstration
+npx --yes create-react-app frontend
+cd frontend
+npm run build
+```
+
+Templates for all the HTML, CSS, and JS for your model application are now present in the `demonstration/frontend` directory.
+
+### Install Express.js
+
+Express.js is the final component of the basic MERN stack. Install it using the following commands:
+
+```command
+cd ..; mkdir server; cd server
+npm init -y
+cd ..
+npm install cors express mongodb mongoose nodemon
+```
+
+## Use the MERN Stack to Create an Example Application
+
+The essence of a web application is to respond to a request from a web browser with an appropriate result, backed by a datastore that "remembers" crucial information from one session to the next. Any realistic full-scale application involves account management, database backup, context dependence, and other refinements. Rather than risk the distraction and loss of focus these details introduce, this section illustrates the simplest possible use of MERN to implement a [three-tier operation](https://www.ibm.com/cloud/learn/three-tier-architecture) typical of real-world applications.
+
+"Three-tier" in this context refers to the teamwork web applications embody between:
+
+- The presentation in the web browser of the state of an application
+- The "back end" of the application which realizes that state
+- The datastore which supports the back end beyond a single session of the front end or even the restart of the back end.
+
+You can create a tiny application that receives a request from a web browser, creates a database record based on that request, and responds to the request. The record is visible within the Mongo datastore.
+
+### Initial Configuration of the MERN Application
+
+1. Create `demonstration/server/index.js` with the following content:
+
+ ```file {title="demonstration/server/index.js" lang="javascript"}
+ const express = require('express');
+ const bodyParser = require('body-parser');
+ const mongoose = require('mongoose');
+ const routes = require('../routes/api');
+ const app = express();
+ const port = 4200;
+
+ // Connect to the database
+ mongoose
+ .connect('mongodb://127.0.0.1:27017/', { useNewUrlParser: true })
+ .then(() => console.log(`Database connected successfully`))
+ .catch((err) => console.log(err));
+
+ // Override mongoose's deprecated Promise with Node's Promise.
+ mongoose.Promise = global.Promise;
+ app.use((req, res, next) => {
+ res.header('Access-Control-Allow-Origin', '*');
+ res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
+ next();
+ });
+ app.use(bodyParser.json());
+ app.use('/api', routes);
+ app.use((err, req, res, next) => {
+ console.log(err);
+ next();
+ });
+
+ app.listen(port, () => {
+ console.log(`Server runs on port ${port}.`);
+ });
+ ```
+
+1. Create `demonstration/routes/api.js` with the following content:
+
+ ```file {title="demonstration/routes/api.js" lang="javascript"}
+ const express = require('express');
+ const router = express.Router();
+
+ var MongoClient = require('mongodb').MongoClient;
+ var url = 'mongodb://127.0.0.1:27017/';
+ const mongoose = require('mongoose');
+ var db = mongoose.connection;
+
+ router.get('/record', (req, res, next) => {
+ item = req.query.item;
+ MongoClient.connect(url, function(err, db) {
+ if (err) throw err;
+ var dbo = db.db("mydb");
+ var myobj = { name: item };
+ dbo.collection("demonstration").insertOne(myobj, function(err, res) {
+ if (err) throw err;
+ console.log(`One item (${item}) inserted.`);
+ db.close();
+ })
+ });
+ })
+ module.exports = router;
+ ```
+
+1. Create `demonstration/server/server.js` with the following content:
+
+ ```file {title="demonstration/server/server.js" lang="javascript"}
+ const express = require("express");
+ const app = express();
+ const cors = require("cors");
+ require("dotenv").config({ path: "./config.env" });
+ const port = process.env.PORT || 4200;
+ app.use(cors());
+ app.use(express.json());
+ app.use(require("./routes/record"));
+ const dbo = require("./db/conn");
+
+ app.listen(port, () => {
+ // Connect on start.
+ dbo.connectToServer(function (err) {
+ if (err) console.error(err);
+ });
+ console.log(`Server is running on port: ${port}`);
+ });
+ ```
+
+### Verify Your Application
+
+1. Launch the application server using the following command:
+
+ ```command
+ node server/index.js
+ ```
+
+1. In a web browser, request the URL `localhost:4200/api/record?item=this-new-item`.
+
+ At this point, your terminal should display the following output:
+
+ ```output
+ One item (this-new-item) inserted.
+ ```
+
+1. Now launch an interactive shell to connect to the MongoDB datastore using the following command:
+
+ ```command
+ mongo
+ ```
+
+1. Within the MongoDB shell, run the following commands:
+
+    ```command
+    use mydb
+    db.demonstration.find({})
+    ```
+
+    Mongo should report that it finds a record:
+
+    ```output
+    { "_id" : ObjectId("62c84fe504d6ca2aa325c36b"), "name" : "this-new-item" }
+    ```
+
+This demonstrates a minimal MERN action:
+- The web browser issues a request with particular data.
+- The React frontend framework routes that request.
+- The Express application server receives the data from the request, and acts on the MongoDB datastore.
+
+## Conclusion
+
+You now know how to install each of the basic components of the MERN stack on a standard Ubuntu 20.04 server, and team them together to demonstrate a possible MERN action: creation of one database record based on a browser request.
+
+Any real-world application involves considerably more configuration and source files. MERN enjoys abundant tooling to make the database and web connections more secure, to validate data systematically, to structure a [complete Application Programming Interface](https://www.mongodb.com/blog/post/the-modern-application-stack-part-3-building-a-rest-api-using-expressjs) (API), and to simplify debugging. Nearly all practical applications need to create records, update, delete, and list them. All these other refinements and extensions use the elements already present in the workflow above. You can build everything your full application needs from this starting point.
\ No newline at end of file
diff --git a/docs/guides/development/perl/manage-cpan-modules-with-cpan-minus/index.md b/docs/guides/development/perl/manage-cpan-modules-with-cpan-minus/index.md
index fe689b86a55..e0a5b20bcb3 100644
--- a/docs/guides/development/perl/manage-cpan-modules-with-cpan-minus/index.md
+++ b/docs/guides/development/perl/manage-cpan-modules-with-cpan-minus/index.md
@@ -18,7 +18,7 @@ languages: ["perl"]
tags: ["perl"]
---
-
+
CPAN, the Comprehensive Perl Archive Network, is the primary source for publishing and fetching the latest modules and libraries for the Perl programming language. The default method for installing Perl modules, using the **CPAN Shell**, provides users with a great deal of power and flexibility, but this comes at the cost of a complex configuration and an inelegant default setup.
diff --git a/docs/guides/development/python/monitor-filesystem-events-with-pyinotify/index.md b/docs/guides/development/python/monitor-filesystem-events-with-pyinotify/index.md
index 5ad50498879..1f7b8a05245 100644
--- a/docs/guides/development/python/monitor-filesystem-events-with-pyinotify/index.md
+++ b/docs/guides/development/python/monitor-filesystem-events-with-pyinotify/index.md
@@ -18,7 +18,7 @@ aliases: ['/development/monitor-filesystem-events-with-pyinotify/','/development
tags: ["python"]
---
-
+
File system monitoring through `inotify` can be interfaced through Python using `pyinotify`. This guide will demonstrate how to use a Python script to monitor a directory then explore practical uses by incorporating async modules or running additional threads.
diff --git a/docs/guides/development/version-control/resolving-git-merge-conflicts/index.md b/docs/guides/development/version-control/resolving-git-merge-conflicts/index.md
index 760243d4466..c3a74a2984d 100644
--- a/docs/guides/development/version-control/resolving-git-merge-conflicts/index.md
+++ b/docs/guides/development/version-control/resolving-git-merge-conflicts/index.md
@@ -252,9 +252,9 @@ The chapter on [Advanced Merging](https://git-scm.com/book/en/v2/Git-Tools-Advan
The first command configures Git to use VS Code as your default merge tool. The second command tells Git how to run VS Code, since Git is not aware of VS Code unless configured to use it.
-The `--wait` option is specific to VSCode, and tells it to wait until you explicitly exit rather than moving to the background.
+The `--wait` option is specific to VS Code, and tells it to wait until you explicitly exit rather than moving to the background.
-VSCode gives you three different ways of viewing a merge conflict:
+VS Code gives you three different ways of viewing a merge conflict:

diff --git a/docs/guides/development/version-control/speed-up-your-development-process-with-turborepo/index.md b/docs/guides/development/version-control/speed-up-your-development-process-with-turborepo/index.md
new file mode 100644
index 00000000000..23da433db9b
--- /dev/null
+++ b/docs/guides/development/version-control/speed-up-your-development-process-with-turborepo/index.md
@@ -0,0 +1,273 @@
+---
+slug: speed-up-your-development-process-with-turborepo
+title: "Speed up Your Development Process with Turborepo"
+description: "Learn about Turborepo, the high-performance build system for JavaScript and TypeScript. Discover how it can help speed up your development process."
+authors: ["John Mueller"]
+contributors: ["John Mueller"]
+published: 2023-06-27
+modified: 2024-05-02
+keywords: ['turborepo speeds up development process','monorepo','multirepo','remote scaling','polyrepo']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[npm Docs: npm-prune](https://docs.npmjs.com/cli/v8/commands/npm-prune)'
+- '[Turbo Repo Docs](https://turbo.build/repo/docs)'
+---
+
+A *monorepo* is a powerful method of using a single version-controlled repository to interact with multiple distinct projects that have well-defined associations. In most cases, these projects are logically independent and managed by different teams. For example, Google, Microsoft, Facebook, and Twitter are companies that use immense code repositories, in the terabyte range, to manage their projects. Turborepo is a product that makes it easier to implement a monorepo when working with JavaScript and TypeScript.
+
+## Working with a Monorepo
+
+Monorepos are currently one of the most popular tools available for managing multiple projects under a single umbrella. This means that a code change is reflected in every project that uses that code, rather than having to be replicated. Web developers especially like using monorepos because they usually have to manage a large number of projects.
+
+### What is a monorepo?
+
+A monorepo, sometimes called a monolithic repository (not to be confused with monolithic architecture), is the opposite of a multirepo. A *multirepo* reflects a method of placing each project in its own repository. A monorepo reflects a coordinated effort where code only appears once in a repository, but can be used by everyone.
+
+Moving from a multirepo to a monorepo can be difficult. It requires code consolidation, followed by refactoring, to ensure all of the code points to the right place. The results are worth the effort in most cases because a monorepo provides these, and other, benefits:
+
+- Everyone can see everyone else’s code. This makes it possible for a member of one team to fix another team’s code before they even know there's a problem.
+- Sharing dependencies becomes trivial, reducing the need for an advanced package manager.
+- The number of versioning conflicts is reduced because there is a single "source of truth".
+- The code itself is far more consistent, which reduces the time required to understand what it does.
+- All of the teams using the repository can coordinate their efforts, creating a single timeline for updates.
+
+There are times when a monorepo works well. You want to use a monorepo under the following conditions:
+
+- The projects have a lot of scripts that are dependent on each other. This allows a single change to affect all of the projects requiring that change. However, this feature can also backfire because a broken main/master affects everyone’s projects, not just one.
+- It’s possible to execute tasks in parallel so that the build process can proceed in an efficient manner. A monorepo can experience performance issues when some commands take too long to execute; parallel execution partly overcomes this issue.
+- The projects can support incremental builds, so that only the files with changes are rebuilt.
+- There is a strong data management process in place because monorepos can quickly become immense.
+- All of the projects support a uniform linting configuration to look for patterns that cause problems in the source code.
+- Caching the build steps doesn’t cause problems, which means using remote caching instead of local caching.
+
+### Comparing a Monorepo to a Multirepo
+
+A *multirepo* is also called a *polyrepo*, so you may encounter both terms in your development journey. No matter what you call it, both terms refer to using multiple code repositories to manage projects. When choosing between a monorepo and a multirepo consider that the multirepo generally has a reduced learning curve.
+
+There are two other major issues to consider when working with a monorepo instead of a multirepo. The first is ownership. Sometimes you need to set permissions to ensure that code is only modified by authorized people. For example, when working with code that is affected by legal considerations. The second is code reviews. This process can become chaotic when working with a monorepo, and development teams may get bogged down with notifications.
+
+## Understand the Turborepo Advantage
+
+The advantages of a monorepo usually outweigh the disadvantages for certain types of projects. This is why larger organizations choose to use the monorepo approach. However, you can create a monorepo from scratch using a tool like NPM, PNPM, or Yarn. Unfortunately, these tools don’t scale well, but Turborepo helps overcome such issues. The following sections provide insights into why Turborepo may be the optimal solution for an organization.
+
+### Allow Your Monorepo to Scale
+
+The problem with a monorepo is that it doesn’t scale well in many situations. This is because each workspace has its own testing, linting, and build process. This means that a monorepo could end up executing hundreds of tasks during each deployment and integration. Turborepo solves this problem by supporting remote caching, so that the Continuous Integration (CI) process never performs the same work twice.
+
+### Keep Things Moving with Task Scheduling
+
+In this case, there are two levels of interaction with Turborepo. First, it ensures that each task occurs in the right order, and at the right time. Trying to keep track of all the various projects in a monorepo can prove difficult, time consuming, and error-prone. Efficiently performing tasks in the right order can be hard. Second, Turborepo can bypass time-consuming tasks by using parallel processing. When working with a monorepo in a manually configured environment, many organizations perform one task at a time. This means that resources go unused, leading to inefficiencies.
+
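+To make the task ordering concrete, the following is a minimal sketch of a `turbo.json` pipeline configuration. It is an illustrative assumption and not part of the example project built later in this guide; the `build` and `test` task names and the `dist/**` output path are placeholders.
+
+```file {title="turbo.json"}
+{
+  "$schema": "https://turbo.build/schema.json",
+  "pipeline": {
+    "build": {
+      "dependsOn": ["^build"],
+      "outputs": ["dist/**"]
+    },
+    "test": {
+      "dependsOn": ["build"]
+    }
+  }
+}
+```
+
+With a configuration like this, running `turbo run build test` executes each workspace's `build` before the tasks that depend on it, caches the declared outputs, and runs independent tasks in parallel.
+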
+### Get Rid of Overgrowth with Pruning
+
+The problem with many containers like Docker is that a single change can cause a rebuild and redeployment of all the packages in an application. Turborepo works with the root lockfile to generate a pruned subset, with only the packages necessary to update a given target. This process ensures that packages are only rebuilt and deployed when necessary. [The `turbo prune --scope` command](https://turbo.build/repo/docs/reference/command-line-reference#turbo-prune---scopetarget) creates a sparse lockfile with only the elements that have changed and need to be updated. You can target specific packages to determine if and when they need rebuilding and redeployment.
+
+### Include Support for Multirepo
+
+In most environments, you must choose between a monorepo and a multirepo. It's too complex to maintain a mixed environment in order to get benefits of both. However, Turborepo can support a mixed environment if necessary. In this case, the main contribution from Turborepo is the caching, which reduces the amount of work needed to keep everything in sync. Of course, you need a really good business case for maintaining a mixed environment because it’s still a lot of work. One situation that may require a mixed environment is if you have projects that must keep data safe in a particular way. For example, projects that support the Health Insurance Portability and Accountability Act of 1996 (HIPAA) requirements. A project of this sort needs some of the cached code, but could also contain code that you must maintain in a separate repository.
+
+### What Turborepo Doesn't Do
+
+Turborepo doesn’t install packages. This final piece of the puzzle is left to tools like NPM, PNPM, or Yarn. What Turborepo does is ensure that the package installers work efficiently by limiting them strictly to what they do best: installing packages.
+
+## Before You Begin
+
+1. If you have not already done so, create a Linode account and Compute Instance. See our [Getting Started with Linode](/docs/guides/getting-started/) and [Creating a Compute Instance](/docs/guides/creating-a-compute-instance/) guides.
+
+1. Follow our [Setting Up and Securing a Compute Instance](/docs/guides/set-up-and-secure/) guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.
+
+1. Follow the instructions in our guide [Installing and Using NVM (Node Version Manager)](/docs/guides/how-to-install-use-node-version-manager-nvm/) to install NVM, Node.js, and NPM.
+
+1. You should also be familiar with Git, and have access to a remote repository on GitHub, GitLab, Bitbucket, or other compatible platform. See our [Getting Started with Git](/docs/guides/how-to-configure-git/) guide to learn more about Git.
+
+{{< note >}}
+This guide is written for a non-root user. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see the [Users and Groups](/docs/tools-reference/linux-users-and-groups/) guide.
+{{< /note >}}
+
+## Get Your Own Copy of Turborepo
+
+If you followed the prerequisites above, you should have a Linode compute instance with NVM, Node.js, and NPM installed. You can now install Turborepo. Use `npm` to install it globally, which allows use of Turborepo on any project:
+
+```command
+npm install turbo --global
+```
+
+You should see a few messages telling you about the installation progress.
+
+## Develop a Basic TypeScript Example
+
+Having NPM and Turborepo installed means you can create a small test application. The following steps tell you how.
+
+1. Create a directory for the test application repository and change into it:
+
+ ```command
+ mkdir testApp
+ cd testApp
+ ```
+
+1. Initialize a local Git repository:
+
+ ```command
+ git init
+ ```
+
+1. Enter the following commands to configure Git, add a README file to the repository, and make an initial commit:
+
+ ```command
+ echo "# Test Application" >> README.md
+ git config --global user.email "you@example.com"
+ git config --global user.name "Your Name"
+ git add . && git commit -m "Initial commit"
+ ```
+
+1. Add the remote repository, replacing the URL with your own remote Git address, such as `https://github.com/example-username/example-repository.git`:
+
+    ```command
+    git remote add origin https://github.com/example-username/example-repository.git
+    ```
+
+1. Push the initial commit to the remote repository and set the upstream master branch:
+
+ ```command
+ git push -u origin master
+ ```
+
+1. Initialize the project:
+
+ ```command
+ npm init -y
+ ```
+
+    This step creates a `package.json` file, whose contents are echoed to the display:
+
+ ```output
+ Wrote to /home/example-user/testApp/package.json:
+
+ {
+ "name": "testapp",
+ "version": "1.0.0",
+ "description": "",
+ "main": "index.js",
+ "scripts": {
+ "test": "echo \"Error: no test specified\" && exit 1"
+ },
+ "repository": {
+ "type": "git",
+ "url": "git+https://github.com/example-user/turborepo.git"
+ },
+ "keywords": [],
+ "author": "",
+ "license": "ISC",
+ "bugs": {
+ "url": "https://github.com/example-user/turborepo/issues"
+ },
+ "homepage": "https://github.com/example-user/turborepo#readme"
+ }
+ ```
+
+1. Create a `.gitignore` file that tells Git which files and directories to ignore:
+
+ ```command
+ echo "node_modules" >> .gitignore
+ ```
+
+    In this case, the entry keeps the installed Node.js modules out of version control, which makes managing the test application easier.
+
+1. Install TypeScript as a development dependency, so a developer can use it without it being installed as part of the application:
+
+ ```command
+ npm install --save-dev typescript
+ ```
+
+1. To compile the TypeScript application, you need to create a `tsconfig.json` file:
+
+ ```command
+ nano tsconfig.json
+ ```
+
+1. Enter the following code into the `tsconfig.json` file:
+
+    ```file {title="tsconfig.json" lang="json"}
+ {
+ "compilerOptions": {
+ "target": "es5",
+ "module": "commonjs",
+ "declaration": true,
+ "outDir": "./lib",
+ "strict": true
+ },
+ "include": ["src"],
+ "exclude": ["node_modules", "**/__tests__/*"]
+ }
+ ```
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Create a source code directory, then access that directory:
+
+ ```command
+ mkdir src
+ cd src
+ ```
+
+1. Create an `index.ts` file:
+
+ ```command
+ nano index.ts
+ ```
+
+1. Give it the contents shown below:
+
+    ```file {title="index.ts" lang="typescript"}
+ var message:string = "Hello World"
+ console.log(message)
+ ```
+
+1. When done, save the file and exit `nano` as above.
+
+1. Change back into the main `testApp` directory and open the `package.json` file created earlier:
+
+ ```command
+ cd ..
+ nano package.json
+ ```
+
+1. Modify the file as highlighted below, paying particular attention to the addition of the comma at the end of line seven:
+
+ ```file {title="package.json" linenostart="6" hl_lines="2,3"}
+ "scripts": {
+ "test": "echo \"Error: no test specified\" && exit 1",
+ "build" : "tsc"
+ },
+ ```
+
+1. When done, save the file and exit `nano`.
+
+1. Build the application:
+
+ ```command
+ npm run build
+ ```
+
+ The application should compile as expected:
+
+ ```output
+ > testapp@1.0.0 build
+ > tsc
+ ```
+
+1. View the JavaScript output created during the build process:
+
+ ```command
+ cat lib/index.js
+ ```
+
+ ```output
+ "use strict";
+ var message = "Hello World";
+ console.log(message);
+ ```
+
+## Conclusion
+
+Using a monorepo in place of a multirepo can save considerable time, money, and frustration, especially when managing multiple projects that rely on common code and involve long testing, linting, and build processes. It helps to make things consistent and centralizes the efforts of everyone in an organization. There are also downsides, however, most notably scalability. Turborepo doesn’t try to replace tools like NPM, PNPM, or Yarn. Instead, it augments them and simplifies the techniques required to use them. Turborepo can provide a significant benefit to your organization, especially as the number and size of your projects grow.
\ No newline at end of file
diff --git a/docs/guides/email/postfix/pflogsumm-for-postfix-monitoring-on-centos-6/index.md b/docs/guides/email/postfix/pflogsumm-for-postfix-monitoring-on-centos-6/index.md
index c2e13ee3185..fe8869da618 100644
--- a/docs/guides/email/postfix/pflogsumm-for-postfix-monitoring-on-centos-6/index.md
+++ b/docs/guides/email/postfix/pflogsumm-for-postfix-monitoring-on-centos-6/index.md
@@ -13,7 +13,7 @@ external_resources:
- '[Pflogsumm](http://jimsun.linxnet.com/postfix_contrib.html)'
---
-
+
Pflogsumm is a simple Perl script that monitors your [Postfix](/docs/email/postfix/) mail server's activity. This guide will show you how to install Pflogsumm on CentOS 6 and configure it to send you a daily email with your mail server stats.
diff --git a/docs/guides/game-servers/install-a-half-life-2-deathmatch-dedicated-server-on-debian-or-ubuntu/index.md b/docs/guides/game-servers/install-a-half-life-2-deathmatch-dedicated-server-on-debian-or-ubuntu/index.md
index 14ed43e6bf0..1cb6dbc30ed 100644
--- a/docs/guides/game-servers/install-a-half-life-2-deathmatch-dedicated-server-on-debian-or-ubuntu/index.md
+++ b/docs/guides/game-servers/install-a-half-life-2-deathmatch-dedicated-server-on-debian-or-ubuntu/index.md
@@ -203,14 +203,14 @@ sv_password "MyLinode"
There are eight (8) official maps in Half-Life 2: Deathmatch. A preview of each map is available on [Combine OverWiki's official page](http://combineoverwiki.net/wiki/Half-Life_2:_Deathmatch#Maps):
-* dm_lockdown
-* dm_overwatch
-* dm_powerhouse
-* dm_resistance
-* dm_runoff
-* dm_steamlab
-* dm_underpass
-* halls3
+* `dm_lockdown`
+* `dm_overwatch`
+* `dm_powerhouse`
+* `dm_resistance`
+* `dm_runoff`
+* `dm_steamlab`
+* `dm_underpass`
+* `halls3`
Half-Life 2 Deathmatch requires that custom maps be in specific locations based on their type:
diff --git a/docs/guides/game-servers/install-black-mesa-on-debian-or-ubuntu/index.md b/docs/guides/game-servers/install-black-mesa-on-debian-or-ubuntu/index.md
index 5ac48b92b7e..ce4a98526f8 100644
--- a/docs/guides/game-servers/install-black-mesa-on-debian-or-ubuntu/index.md
+++ b/docs/guides/game-servers/install-black-mesa-on-debian-or-ubuntu/index.md
@@ -117,21 +117,21 @@ It's located at: `/home/steam/Steam/steamapps/common/Black Mesa Dedicated Server
### Maps
Currently, there are 10 official maps in Black Mesa Dedicated Server:
-* dm_bounce
-* dm_chopper
-* dm_crossfire
-* dm_gasworks
-* dm_lambdabunker
-* dm_power
-* dm_stack
-* dm_stalkyard
-* dm_subtransit
-* dm_undertow
+* `dm_bounce`
+* `dm_chopper`
+* `dm_crossfire`
+* `dm_gasworks`
+* `dm_lambdabunker`
+* `dm_power`
+* `dm_stack`
+* `dm_stalkyard`
+* `dm_subtransit`
+* `dm_undertow`
Three additional official maps are available in the Steam Workshop:
-* [dm_boom](http://steamcommunity.com/sharedfiles/filedetails/?id=432070352)
-* [dm_rail](http://steamcommunity.com/sharedfiles/filedetails/?id=432072942)
-* [dm_shipping](http://steamcommunity.com/sharedfiles/filedetails/?id=432074065)
+* [`dm_boom`](http://steamcommunity.com/sharedfiles/filedetails/?id=432070352)
+* [`dm_rail`](http://steamcommunity.com/sharedfiles/filedetails/?id=432072942)
+* [`dm_shipping`](http://steamcommunity.com/sharedfiles/filedetails/?id=432074065)
### Custom Maps
diff --git a/docs/guides/game-servers/install-dont-starve-together-game-server-on-ubuntu/index.md b/docs/guides/game-servers/install-dont-starve-together-game-server-on-ubuntu/index.md
index 15e4f97de24..2366d13caf2 100644
--- a/docs/guides/game-servers/install-dont-starve-together-game-server-on-ubuntu/index.md
+++ b/docs/guides/game-servers/install-dont-starve-together-game-server-on-ubuntu/index.md
@@ -1,19 +1,19 @@
---
slug: install-dont-starve-together-game-server-on-ubuntu
-title: 'Install Don''t Starve Together Game Server on Ubuntu 14.04'
-description: 'Install and Configure a Don''t Starve Together Multi-player Game Server for Ubuntu 14.04'
+title: "Install Don't Starve Together Game Server on Ubuntu 14.04"
+description: "Install and Configure a Don't Starve Together Multi-player Game Server for Ubuntu 14.04"
authors: ["Andrew Gottschling"]
contributors: ["Andrew Gottschling"]
published: 2015-04-14
modified: 2019-02-01
-keywords: ["don''t starve", "don''t starve together", "game servers", "games", "ubuntu", " ubuntu 14.04", "steam cmd", "steamcmd", "token"]
+keywords: ["don't starve", "don't starve together", "game servers", "games", "ubuntu", " ubuntu 14.04", "steam cmd", "steamcmd", "token"]
tags: ["debian", "ubuntu"]
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
aliases: ['/applications/game-servers/dont-starve-together-on-ubuntu/','/game-servers/install-dont-starve-together-game-server-on-ubuntu/','/applications/game-servers/install-dont-starve-together-game-server-on-ubuntu/']
dedicated_cpu_link: true
---
-
+
[Don’t Starve Together](https://www.kleientertainment.com/games/dont-starve-together) is a multiplayer game written and published by Klei Entertainment, and is a multiplayer add- on to their single-player game Don’t Starve. This guide will explain how to prepare your Linode and install, then configure, Don’t Starve Together.
diff --git a/docs/guides/game-servers/minecraft-with-bungee-cord/index.md b/docs/guides/game-servers/minecraft-with-bungee-cord/index.md
index 4627e2cb26d..de0ac922d29 100644
--- a/docs/guides/game-servers/minecraft-with-bungee-cord/index.md
+++ b/docs/guides/game-servers/minecraft-with-bungee-cord/index.md
@@ -222,7 +222,7 @@ Next, ensure that in your spigot.yml file you have set bungeecord to true
{{< /file >}}
-After, you set the right values for bungeecord and ip_forward, restart the Spigot servers to enable IP forwarding.
+After you set the right values for `bungeecord` and `ip_forward`, restart the Spigot servers to enable IP forwarding.
## Troubleshooting
diff --git a/docs/guides/kubernetes/controlling-linode-lke-costs-using-kubecost/index.md b/docs/guides/kubernetes/controlling-linode-lke-costs-using-kubecost/index.md
index 9ea77903004..133cdd5e37a 100644
--- a/docs/guides/kubernetes/controlling-linode-lke-costs-using-kubecost/index.md
+++ b/docs/guides/kubernetes/controlling-linode-lke-costs-using-kubecost/index.md
@@ -76,7 +76,7 @@ A quality monitoring plan takes the functionality of your tools, business needs,
- Capture historical data to make it possible to predict future performance based on expected conditions at specific times.
- Ensure that the user experience remains unaffected by monitoring and cost-control efforts.
-A key component of a good monitoring plan is alerting. When configuring alerts, keep these priciples in mind:
+A key component of a good monitoring plan is alerting. When configuring alerts, keep these principles in mind:
- Engineer alerts so that the overall number of alerts are kept at a minimum.
- Consider when and who should receive alerts.
@@ -101,7 +101,7 @@ The following components must be in place prior to installing Kubecost:
### Installing Kubecost
{{< note title="Kubecost 2.0" >}}
-As of January 2024, the below instructions install Kubecost 2.0. See Kubecost's blog for more information about Kubecost 2.0, including functionality improvements: [Introducting Kubecost 2.0](https://blog.kubecost.com/blog/introducing-kubecost-2.0/)
+As of January 2024, the below instructions install Kubecost 2.0. See Kubecost's blog for more information about Kubecost 2.0, including functionality improvements: [Introducing Kubecost 2.0](https://blog.kubecost.com/blog/introducing-kubecost-2.0/)
{{< /note >}}
1. Navigate to the [Kubecost registration page](https://www.kubecost.com/install#show-instructions) and complete the sign up steps by entering your email address. Once complete, you are brought to a page with Kubecost installation instructions. The instructions include a `kubecostToken` that is required for installation.
@@ -147,7 +147,7 @@ OpenCost is free, whereas Kubecost offers freemium and paid versions with differ
[Loft](https://loft.sh/) is a control platform that operates on top of existing Kubernetes clusters. Loft works with individual clusters rather than residing outside clusters or relying on a separate engine. Loft has relatively simple setup and configuration processes, though it doesn't have the level of overview provided by Kubecost or OpenCost.
-Two areas of note for Loft are the sleep mode feature and accounting functionality. With sleep mode, Loft automatically puts idle namespaces to sleep based on user-provided critera, rather than only informing you of a cost problem. It also has the ability to delete namespances when they become old and unused. Accounting in Loft allows you to set quotas for each user, account, and team. Loft also offers enterprise-grade, multi-tenant access control, security, and fully automated tenant isolation, among other features.
+Two areas of note for Loft are the sleep mode feature and accounting functionality. With sleep mode, Loft automatically puts idle namespaces to sleep based on user-provided criteria, rather than only informing you of a cost problem. It also has the ability to delete namespaces when they become old and unused. Accounting in Loft allows you to set quotas for each user, account, and team. Loft also offers enterprise-grade, multi-tenant access control, security, and fully automated tenant isolation, among other features.
### CAST AI
diff --git a/docs/guides/kubernetes/deploy-minio-on-kubernetes-using-kubespray-and-ansible/index.md b/docs/guides/kubernetes/deploy-minio-on-kubernetes-using-kubespray-and-ansible/index.md
index 20eff3e5995..f69537f3524 100644
--- a/docs/guides/kubernetes/deploy-minio-on-kubernetes-using-kubespray-and-ansible/index.md
+++ b/docs/guides/kubernetes/deploy-minio-on-kubernetes-using-kubespray-and-ansible/index.md
@@ -16,7 +16,7 @@ external_resources:
- '[Kubespray](https://github.com/kubernetes-incubator/kubespray)'
---
-
+
## What is Minio?
diff --git a/docs/guides/platform/object-storage/replicate-bucket-contents-with-rclone/index.md b/docs/guides/platform/object-storage/replicate-bucket-contents-with-rclone/index.md
index 9cc555ab679..80d9ac77fdd 100644
--- a/docs/guides/platform/object-storage/replicate-bucket-contents-with-rclone/index.md
+++ b/docs/guides/platform/object-storage/replicate-bucket-contents-with-rclone/index.md
@@ -169,7 +169,7 @@ The first method below uses user data with our Metadata service during instance
When choosing a distribution image, select one of the versions of Ubuntu that is both supported by the script (see [Running the Script](#running-the-script)) and compatible with cloud-init (denoted with a note icon).
- When choosing a region, select a region where the Metadata servce is available. A list of data center availability for Metadata can be found in our [Overview of the Metadata Service](/docs/products/compute/compute-instances/guides/metadata/#availability) guide.
+ When choosing a region, select a region where the Metadata service is available. A list of data center availability for Metadata can be found in our [Overview of the Metadata Service](/docs/products/compute/compute-instances/guides/metadata/#availability) guide.
Stop when you get to the **Add User Data** section.
diff --git a/docs/guides/quick-answers/linux/how-to-use-fsck-to-fix-disk-problems/index.md b/docs/guides/quick-answers/linux/how-to-use-fsck-to-fix-disk-problems/index.md
index 5e8aad818fc..17a08dd2796 100644
--- a/docs/guides/quick-answers/linux/how-to-use-fsck-to-fix-disk-problems/index.md
+++ b/docs/guides/quick-answers/linux/how-to-use-fsck-to-fix-disk-problems/index.md
@@ -16,7 +16,7 @@ tags: ["linux"]
aliases: ['/quick-answers/linux/how-to-use-fsck-to-fix-disk-problems/']
---
-
+
This guide is part of a series on Linux commands and features. Not all commands may be relevant to Linode-specific hardware, and are included here to provide an easy to access reference for the Linux community. If you have a command or troubleshooting tip that would help others, please submit a pull request or comment.
diff --git a/docs/guides/security/ssh/how-to-use-yubikey-for-two-factor-ssh-authentication/index.md b/docs/guides/security/ssh/how-to-use-yubikey-for-two-factor-ssh-authentication/index.md
index ad765a15918..43ccb64f401 100644
--- a/docs/guides/security/ssh/how-to-use-yubikey-for-two-factor-ssh-authentication/index.md
+++ b/docs/guides/security/ssh/how-to-use-yubikey-for-two-factor-ssh-authentication/index.md
@@ -15,7 +15,7 @@ external_resources:
- '[Official Yubico PAM Module Documentation](https://developers.yubico.com/yubico-pam/)'
---
-
+
## What is Yubikey?
diff --git a/docs/guides/tools-reference/file-transfer/how-to-use-scp/index.md b/docs/guides/tools-reference/file-transfer/how-to-use-scp/index.md
index 61173924031..9514c4cbba5 100644
--- a/docs/guides/tools-reference/file-transfer/how-to-use-scp/index.md
+++ b/docs/guides/tools-reference/file-transfer/how-to-use-scp/index.md
@@ -2,10 +2,11 @@
slug: how-to-use-scp
title: "Transfer Files With the scp Command on Linux"
title_meta: "How to Transfer Files With the scp Command on Linux"
-description: 'Learn how to transfer files using SCP on Linux, and how SCP compares to other means of transferring files.'
+description: "Learn how to transfer files using SCP on Linux, and how SCP compares to other means of transferring files."
authors: ["Jeff Novotny"]
-contributors: ["Jeff Novotny"]
+contributors: ["Jeff Novotny", "Adam Overa"]
published: 2023-03-14
+modified: 2024-05-01
keywords: ['Scp command','Scp linux','Scp syntax','Scp example']
license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
external_resources:
@@ -13,49 +14,68 @@ external_resources:
- '[TFTP RFC 1350](https://datatracker.ietf.org/doc/html/rfc1350)'
---
-Copying files to a remote computer is a very common task. There are many programs and utilities to accomplish this task, but not all of them are secure. A popular choice for more quickly and securely copying files is the Secure Copy Protocol (SCP). This guide describes how SCP works and explains how to use the `scp` command on Linux distributions. It also provides several `scp` examples, demonstrating several different scenarios.
+Copying files to a remote computer is a very common task. While there are many programs and utilities to accomplish this, not all of them are secure. A popular choice to more quickly and securely copy files is the Secure Copy Protocol (SCP). This guide describes how SCP works and explains how to use the `scp` command on Linux distributions. It also provides several `scp` examples, demonstrating several different scenarios.
## An Introduction to SCP
SCP is a way to transfer files with a reasonably high level of security. It allows users to copy files between their local server and a remote system, leaving the original in place. SCP can both upload and download files. It also allows users to copy over entire directories of files. As an extra convenience, it can even copy files between two different remote systems.
-SCP refers to both the protocol and the `scp` Linux utility. SCP replaced the original `rcp` command, which is no longer considered secure. It is not defined in an RFC, but most Linux distributions have "man" pages describing how to use it. For example, Ubuntu includes a [scp man page](http://manpages.ubuntu.com/manpages/focal/man1/scp.1.html).
+SCP refers to both the protocol and the `scp` Linux utility. SCP replaced the original `rcp` command, which is no longer considered secure. It is not defined in an RFC, but most Linux distributions have documentation or a "man page" describing how to use it. For example, Ubuntu includes a [scp man page](http://manpages.ubuntu.com/manpages/focal/man1/scp.1.html).
Before transferring the files, the client establishes an SCP connection to the remote server. By default, SCP connects using Transport Control Protocol (TCP) port `22`. The remote server then invokes the SCP process. SCP can operate in one of two modes:
-- **Source mode**: Source mode accesses the requested source file from the file system and transmits it back to the client.
-- **Sink mode**: Sink mode accepts the file from the client and saves it to the specified directory.
+- **Source Mode**: Source mode accesses the requested source file from the file system and transmits it back to the client.
+- **Sink Mode**: Sink mode accepts the file from the client and saves it to the specified directory.
SCP uses the *Secure Shell* (SSH) protocol as a base layer. SSH authenticates the user and encrypts the data for transfer. In addition to encrypting the file contents, SCP also encrypts all passwords. Because the files are encrypted, they cannot be accessed via a man-in-the-middle attack.
SCP also supports remote-to-remote mode. Originally, SCP established a direct connection between the remote source and the remote destination. This allowed data to pass between the two nodes without having to pass through the local host. But in most recent releases, data is routed through the originating node as the default. This is more secure but is also less efficient.
-SCP is designed for speed and efficiency. It is considered a solid, reliable, and straightforward way to copy files. But it is very basic in its functionality, and some security analysts have criticized it as inflexible and limited. For example, SCP does not interact properly with interactive shell profiles. SSH profile messages can also cause errors or connection failures. SCP does not allow users to list, delete, or rename files. Because of its limited functionality, some experts recommend SFTP and `rsync` instead.
+SCP is designed for speed and efficiency. It is considered a solid, reliable, and straightforward way to copy files. However, it is very basic in its functionality. Some security analysts have criticized it as inflexible and limited. For example, SCP does not interact properly with interactive shell profiles. SSH profile messages can also cause errors or connection failures. SCP does not allow users to list, delete, or rename files. Because of its limited functionality, some experts recommend SFTP and `rsync` instead.
{{< note >}}
No single protocol can be considered completely secure on its own. Before handling extremely sensitive data, consult with a security expert.
{{< /note >}}
-## The Differences Between SCP and SFTP
+### The Differences Between SCP and SFTP
-SCP and the SSH File Transfer Protocol (SFTP) are two alternative methods of more securely copying files between different systems. Both protocols have their advantages. However, either option can be used in most cases. Following are some of the similarities and differences between the two systems.
+Both SCP and the SSH File Transfer Protocol (SFTP) are methods to more securely copy files between different systems. While both protocols have their own advantages, either option can be used in most cases. The following list highlights some of the similarities and differences between the two systems:
-- Both SCP and the SSH File Transfer Protocol (SFTP) are considered more secure than legacy protocols like FTP.
-- Both protocols use TCP as their transport protocol. They both use port `22` by default.
-- Both protocols rely on SSH for encryption and public key authentication. However, SCP only uses SSH as a supporting layer while SFTP is based on SSH. SSH is better integrated with SFTP than it is with SCP.
-- Older releases of SCP had some security vulnerabilities. For instance, attackers could compromise an SCP server. However, new versions of SCP have fixed these issues. These issues were never present in SFTP.
-- SCP is typically faster than SFTP. It uses a more efficient algorithm to transfer the files.
-- SCP is optimized for one-time file transfers and works well with shell scripts.
-- SCP works better on Linux systems, while SFTP is the standard for Windows.
-- SCP is non-interactive, but SFTP permits interactive sessions. SFTP allows users to pause and resume file transfers.
-- SFTP has additional file management features. It allows users to list, delete, and rename files. SCP is a simpler protocol that can only perform basic file transfers.
+- Both SCP and the SSH File Transfer Protocol (SFTP) are considered more secure than legacy protocols like FTP.
+- Both protocols use TCP as their transport protocol. They both use port `22` by default.
+- Both protocols rely on SSH for encryption and public key authentication. However, SCP only uses SSH as a supporting layer while SFTP is based on SSH. SSH is better integrated with SFTP than it is with SCP.
+- Older releases of SCP had some security vulnerabilities. For instance, attackers could compromise an SCP server. However, new versions of SCP have fixed these issues. Such issues were never present in SFTP.
+- SCP is typically faster than SFTP. It uses a more efficient algorithm to transfer the files.
+- SCP is optimized for one-time file transfers and works well with shell scripts.
+- SCP works better on Linux systems, while SFTP is the standard for Windows.
+- SCP is non-interactive, but SFTP permits interactive sessions. SFTP allows users to pause and resume file transfers.
+- SFTP has additional file management features. It allows users to list, delete, and rename files. SCP is a simpler protocol that can only perform basic file transfers.
## Before You Begin
-1. If you have not already done so, create a Linode account and Compute Instance. See our [Getting Started with Linode](/docs/products/platform/get-started/) and [Creating a Compute Instance](/docs/products/compute/compute-instances/guides/create/) guides.
+1. If you have not already done so, create a Linode account and at least two Compute Instances. See our [Getting Started with Linode](/docs/products/platform/get-started/) and [Creating a Compute Instance](/docs/products/compute/compute-instances/guides/create/) guides.
1. Follow our [Setting Up and Securing a Compute Instance](/docs/products/compute/compute-instances/guides/set-up-and-secure/) guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.
+1. Create the following example files and directories on your local machine:
+
+ ```command {title="Local Machine"}
+ mkdir ~/example_archive ~/example_directory
+ touch ~/example_directory/file1.txt ~/example_directory/file2.txt ~/example_directory/file3.txt
+ ```
+
+1. Log in to the first instance and create the following example directory:
+
+ ```command {title="Instance #1"}
+ mkdir ~/example_backup1
+ ```
+
+1. Log in to the second instance and create the following example directory:
+
+ ```command {title="Instance #2"}
+ mkdir ~/example_backup2
+ ```
+
{{< note >}}
The steps in this guide are written for non-root users. Commands that require elevated privileges are prefixed with `sudo`. If you are not familiar with the `sudo` command, see the [Linux Users and Groups](/docs/guides/linux-users-and-groups/) guide.
{{< /note >}}
@@ -73,9 +93,11 @@ The `scp` syntax is not as complex as it first appears. A typical `scp` command
The `hostid` can be either a hostname or an IP address, and the remote `file` or `directory` must specify the full path.
-The `scp` command syntax follows this format, with optional components enclosed in square brackets `[]`.
+The `scp` command syntax follows this format, with optional components enclosed in square brackets `[]`:
- scp [options] [source_username@source_host:]source_file [dest_userid@dest_host:]destination_dir
+```command
+scp [options] source_username@source_host:source_file dest_userid@dest_host:destination_dir
+```
The `scp` command permits users to choose from a list of options. The most common `scp` options are as follows:
@@ -84,123 +106,161 @@ The `scp` command permits users to choose from a list of options. The most commo
- **-l**: Set a bandwidth limit for the file transfer.
- **-P**: Use the specified port for SSH.
- **-p**: Copy over the file modification and access time.
-- **-q**: Use quiet mode. Quiet mode suppresses the progress meter and informational messages. Error messages are still printed.
+- **-q**: Use quiet mode. Quiet mode suppresses the progress meter and informational messages, though error messages are still printed.
- **-r**: Copy directories recursively.
- **-v**: Print debug messages.
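+
+As a quick illustration of combining several of these options, the hypothetical command below recursively copies a local directory in quiet mode over port `2200` (this assumes the remote SSH service listens on that port and reuses the example directories and placeholder host details from this guide):
+
+```command {title="Local Machine"}
+scp -P 2200 -r -q ~/example_directory REMOTE_USER_1@REMOTE_IP_ADDRESS_1:example_backup1
+```
+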
-To use `scp`, the user must have read access for the files they are transferring and write permission on the destination directory. For authentication purposes, either an SSH key or user password is required for the destination. For more information on SSH, see the Linode guide to [Connecting to a Remote Server Over SSH on Linux](/docs/guides/connect-to-server-over-ssh-on-linux/).
+To use `scp`, the user must have read access for the files they are transferring and write permission on the destination directory. For authentication purposes, either an SSH key or user password is required for the destination. For more information on SSH, see our guide on [Connecting to a Remote Server Over SSH on Linux](/docs/guides/connect-to-server-over-ssh-on-linux/).
{{< note type="alert" >}}
-Exercise a high degree of caution when using `scp`. It does not provide any warnings or ask for confirmation before overwriting an existing file sharing the same name. It is very easy to accidentally overwrite files or directories, especially when using `scp` in recursive mode.
+Exercise a high degree of caution when using `scp`. It does not provide any warnings or ask for confirmation before overwriting an existing file with the same name. It is very easy to accidentally overwrite files or directories, especially when using `scp` in recursive mode.
{{< /note >}}
-### How to Transfer Files from a Local System to a Remote Server Using SCP?
+## How to Transfer Files from a Local System to a Remote Server Using SCP
The following principles apply when using `scp` to copy a file from the local host to a remote server:
-- Use the syntax `scp [options] local_directory/local_filename remote_username@remote_host:remote_target_directory`.
+- Use the syntax `scp [options] local_directory/local_filename remote_username@remote_hostid:remote_target_directory`.
- The full path of the remote directory must be specified. The local path can be either relative or absolute.
- Include the name of the user account for the remote system.
-- The host can be identified by either its name or its IP address.
+- The host can be identified by either its name or IP address.
- A username or account is not required for the local file.
- The user must have read access to the files being transferred, and write access to the destination directory.
-The following example copies a file named `file1.txt` to the `/backup` directory on the destination server. It specifies a username for the destination server, along with an IP address for the destination. To use the `scp` command on Linux to transfer a local file, follow the steps below:
+The following example copies a file named `file1.txt` to the `/example_backup1` directory on the destination server. It specifies a username for the destination server, along with an IP address for the destination. To use the `scp` command on Linux to transfer a local file, follow the steps below:
-1. Enter the SCP information using the name of the local file and full details for the remote server.
+Enter the SCP information using the name of the local file and full details for the remote server:
- scp tmpdir/file1.txt remoteuser@192.0.2.254:backup
+```command {title="Local Machine"}
+scp example_directory/file1.txt REMOTE_USER_1@REMOTE_IP_ADDRESS_1:example_backup1
+```
-1. Unless SSH is already using public keys, it prompts for a password for the remote system. Enter the password at the prompt.
+Enter the remote user password if prompted. The system displays a progress bar indicating the amount of data that has been transferred. When the progress bar reaches `100%`, the transfer is complete:
- {{< output >}}
- remoteuser@192.0.2.254's password
- {{< /output >}}
+```output
+file1.txt 100% 0 0.0KB/s 00:00
+```
-1. The system displays a progress bar indicating the amount of data that has been transferred. When the progress bar reaches `100%`, the transfer is complete.
+Access the remote server to confirm the file is now present.
- {{< output >}}
- file1.txt 100% 1000 313.4KB/s 00:00
- {{< /output >}}
+### Rename Copied Files
-1. (**Optional**) Access the remote server and confirm the file is now present.
+To give the file a new name on the destination server, append a new name to the target directory. This command renames the copy of `file1.txt` to `file1.bak` on the destination server.
-To give the file a new name on the destination server, append a new name to the target directory. This command renames the copy of `file1.txt` to `file100.txt` on the destination server.
+```command {title="Local Machine"}
+scp example_directory/file1.txt REMOTE_USER_1@REMOTE_IP_ADDRESS_1:example_backup1/file1.bak
+```
- scp tmpdir/file1.txt remoteuser@192.0.2.254:backup/file100.txt
+```output
+file1.txt 100% 0 0.0KB/s 00:00
+```
-Place any options between the `scp` command keyword and the name of the local file. The example below uses the `-r` option to recursively copy all the files from the local `tmpdir` directory to the destination directory.
+### Copy Directory and All Files
- scp -r tmpdir remoteuser@192.0.2.254:backup
+Place any options between the `scp` command keyword and the name of the local file. The example below uses the `-r` option to recursively copy the local `example_directory` directory and all of its files to the destination directory:
-{{< output >}}
-file1.txt 100% 1000 1.6MB/s 00:00
-newfile.txt 100% 1000 564.6KB/s 00:00
-newfile.txt.bak 100% 1000 1.9MB/s 00:00
-{{< /output >}}
+```command {title="Local Machine"}
+scp -r example_directory REMOTE_USER_1@REMOTE_IP_ADDRESS_1:example_backup1
+```
-The default TCP port `22` can be overridden using the `-P` flag, for example, `scp -P 2000`. To execute the original `scp` example using port `2000`, use the following command:
+```output
+file1.txt 100% 0 0.0KB/s 00:00
+file2.txt 100% 0 0.0KB/s 00:00
+file3.txt 100% 0 0.0KB/s 00:00
+```
+
+### Copy Using a Different Port
+
+The default TCP port `22` can be overridden using the `-P` flag, for example, `scp -P 2000`. To execute the `scp` example using port `2000`, use the following command:
+
+```command {title="Local Machine"}
+scp -P 2000 example_directory/file2.txt REMOTE_USER_1@REMOTE_IP_ADDRESS_1:example_backup1/file2.bak
+```
+
+```output
+file2.txt 100% 0 0.0KB/s 00:00
+```
{{< note >}}
SCP must be running on the specified port on the destination server.
{{< /note >}}
- scp -P 2000 tmpdir/file1.txt remoteuser@192.0.2.254:backup/file2.txt
+### Transfer Multiple Files
-The `scp` program can efficiently transfer multiple files at the same time. The following command copies `file1.txt` and `file2.txt` to the `backup` directory on the remote server.
+The `scp` program can efficiently transfer multiple files at the same time. The following command copies `file2.txt` and `file3.txt` to the `example_backup1` directory on the remote server.
- scp file1.txt file2.txt remoteuser@192.0.2.254:backup
+```command {title="Local Machine"}
+scp ~/example_directory/file2.txt ~/example_directory/file3.txt REMOTE_USER_1@REMOTE_IP_ADDRESS_1:example_backup1
+```
-### How to Transfer Files from a Remote System to a Local System Using SCP?
+```output
+file2.txt 100% 0 0.0KB/s 00:00
+file3.txt 100% 0 0.0KB/s 00:00
+```
-Transferring files from a remote system to the local system uses the same `scp` command. However, the remote server details are specified first. Enter the remote username, server details, source directory, and filename after any options. Then, specify the directory on the local host to copy the file to. The `scp` command follows the format `scp remote_userid@remote_host:remoteSourceDirectory/SourceFile local_directory`.
+## How to Transfer Files from a Remote System to a Local System Using SCP
-To copy a file from a remote system to the local system, follow the steps below. This example copies `file1.txt` from the `backup` directory of the destination system to the `archive` directory on the local computer.
+Transferring files from a remote system to the local system uses the same `scp` command. However, the remote server details are specified first. Enter the remote username, server details, source directory, and filename after any options. Then, specify the directory on the local host to copy the file to. The `scp` command follows the format `scp remote_userid@remote_host:remoteSourceDirectory/SourceFile local_directory`.
-1. Use the `scp` command to specify the username, identifier, and the full path to the file to transfer to the destination system. Then indicate the destination directory.
+To copy a file from a remote system to the local system, follow the steps below. This example copies `file1.txt` from the `example_backup1` directory of the destination system to the `example_archive` directory on the local computer.
- scp remoteuser@192.0.2.254:backup/file1.txt archive
+Use the `scp` command to specify the username, identifier, and the full path of the file to transfer, then indicate the destination directory:
-1. When requested, enter the password for the remote system.
+```command {title="Local Machine"}
+scp REMOTE_USER_1@REMOTE_IP_ADDRESS_1:example_backup1/file1.txt example_archive
+```
- {{< output >}}
- remoteuser@192.0.2.254's password
- {{< /output >}}
+If requested, enter the password for the remote system. When the progress bar indicates the transfer is `100%` complete, the file has been transferred:
-1. When the progress bar indicates the transfer is `100%` complete, the file has been transferred.
+```output {title="Local Machine"}
+file1.txt 100% 0 0.0KB/s 00:00
+```
- {{< output >}}
- file1.txt 100% 1000 2.2MB/s 00:00
- {{< /output >}}
+Confirm the file is now present on the local system.
-1. Confirm the file is now present on the local system.
+The same options and syntax used when transferring a file to a remote system can also be used here. The following example recursively copies the `/example_backup1` directory and its entire contents on the remote system to the local system:
-The same options and syntax used when transferring a file to a remote system can also be used here. The following example recursively copies the entire contents of the `/backup` directory on the remote system to the local system.
+```command {title="Local Machine"}
+scp -r REMOTE_USER_1@REMOTE_IP_ADDRESS_1:example_backup1 example_archive
+```
- scp -r remoteuser@192.0.2.254:backup archive
+```output
+file1.bak 100% 0 0.0KB/s 00:00
+file1.txt 100% 0 0.0KB/s 00:00
+file1.txt 100% 0 0.0KB/s 00:00
+file2.bak 100% 0 0.0KB/s 00:00
+file2.txt 100% 0 0.0KB/s 00:00
+file2.txt 100% 0 0.0KB/s 00:00
+file3.txt 100% 0 0.0KB/s 00:00
+file3.txt 100% 0 0.0KB/s 00:00
+```
-### How to Transfer Files Between Two Remote Systems Using SCP?
+## How to Transfer Files Between Two Remote Systems Using SCP
The `scp` utility has an unexpected benefit that is not as widely known. It allows users to transfer files between two different remote servers from a third host. The command works the same way, except login and host details are required for both the source and destination servers. After entering the command, `scp` prompts for any required passwords.
- scp remoteuser@192.0.2.254:backup/file2.txt remoteuser2@192.0.2.251:secondarybackup/file3.txt
+```command {title="Local Machine"}
+scp REMOTE_USER_1@REMOTE_IP_ADDRESS_1:example_backup1/file1.txt REMOTE_USER_2@REMOTE_IP_ADDRESS_2:example_backup2/file1.bak
+```
-Some older implementations of `scp` transfer files directly between the source and destination routers. Traffic does not pass through the local host. For added security, traffic is routed through the local machine by default in more recent releases. To force traffic to be transferred through the local machine, use the `-3` option.
+Some older implementations of `scp` transfer files directly between the source and destination servers. Traffic does not pass through the local host. In more recent releases, traffic is routed through the local machine by default for added security. To force traffic to be transferred through the local machine, include the `-3` option:
- scp -3 remoteuser@192.0.2.254:backup/file2.txt remoteuser2@192.0.2.251:secondarybackup/file3.txt
+```command {title="Local Machine"}
+scp -3 REMOTE_USER_1@REMOTE_IP_ADDRESS_1:example_backup1/file2.txt REMOTE_USER_2@REMOTE_IP_ADDRESS_2:example_backup2/file2.bak
+```
{{< note >}}
-The source and destination systems might require an SSH key to authenticate. If `scp` displays any authentication errors, generate an SSH key on the source and share it with the destination server. Then try the command again. The local host and the source server should not require a shared SSH key and can authenticate using a password.
+The source and destination systems both require the local machine's SSH key to authenticate. If `scp` displays any authentication errors, ensure your local machine's SSH key is included in the `~/.ssh/authorized_keys` file on both remote machines.
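+
+One way to do this (a sketch that assumes password authentication is still available on each remote host and reuses the placeholders from the examples above) is to copy your public key with `ssh-copy-id`:
+
+```command {title="Local Machine"}
+ssh-copy-id REMOTE_USER_1@REMOTE_IP_ADDRESS_1
+ssh-copy-id REMOTE_USER_2@REMOTE_IP_ADDRESS_2
+```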
{{< /note >}}
## Use Cases for SCP
-SCP is a straightforward copy utility. It works very efficiently and transfers files quickly. But it does not offer many options and does not work in interactive mode. Nor does it offer any management tools, such as the ability to list remote directories or delete files.
+SCP is a straightforward utility that quickly and efficiently transfers files. However, it does not offer many options and does not work in interactive mode. Nor does it offer any management tools, such as the ability to list remote directories or delete files.
The main use case for SCP is for one-time transfers where speed is important. It is not as useful for more complicated tasks. In those cases, try SFTP instead.
-## Concluding Thoughts About SSH
+## Conclusion
The SCP Linux utility is a more secure alternative to traditional applications like FTP. It can copy files between a local host and a remote server, or between two remote servers. The SCP protocol uses SSH as an underlying layer for authentication and encryption. SCP and SFTP are two methods of transferring files between servers. SCP is faster and simpler, while the more fully-featured SFTP provides an interactive mode and more management options.
-On Linux systems, use the `scp` command to transfer files. Although it has a handful of options, `scp` is very straightforward to use. Details about the source file must be specified first, then information about the destination directory. To authenticate with a remote server, a username and host information must be included. Multiple files can be transferred at the same time, and directories can be recursively copied. For more information about the Linux `scp` command, consult the [Ubuntu man page for scp](http://manpages.ubuntu.com/manpages/focal/man1/scp.1.html).
+Linux systems can use the `scp` command to transfer files. Although it has a handful of options, `scp` is very straightforward to use. Details about the source file must be specified first, then information about the destination directory. To authenticate with a remote server, a username and host information must be included. Multiple files can be transferred at the same time, and directories can be recursively copied. For more information about the Linux `scp` command, consult the [Ubuntu man page for scp](http://manpages.ubuntu.com/manpages/focal/man1/scp.1.html).
\ No newline at end of file
diff --git a/docs/guides/tools-reference/tools/download-resources-from-the-command-line-with-wget/index.md b/docs/guides/tools-reference/tools/download-resources-from-the-command-line-with-wget/index.md
index a6d1c722e73..348f4ba9ecd 100644
--- a/docs/guides/tools-reference/tools/download-resources-from-the-command-line-with-wget/index.md
+++ b/docs/guides/tools-reference/tools/download-resources-from-the-command-line-with-wget/index.md
@@ -12,7 +12,7 @@ aliases: ['/tools-reference/tools/download-resources-from-the-command-line-with-
tags: ["linux"]
---
-
+
## What is wget?
diff --git a/docs/guides/tools-reference/tools/load-testing-with-jmeter/example-nextjs-app.png b/docs/guides/tools-reference/tools/load-testing-with-jmeter/example-nextjs-app.png
new file mode 100644
index 00000000000..954984d2543
Binary files /dev/null and b/docs/guides/tools-reference/tools/load-testing-with-jmeter/example-nextjs-app.png differ
diff --git a/docs/guides/tools-reference/tools/load-testing-with-jmeter/index.md b/docs/guides/tools-reference/tools/load-testing-with-jmeter/index.md
new file mode 100644
index 00000000000..bd0f13f34c5
--- /dev/null
+++ b/docs/guides/tools-reference/tools/load-testing-with-jmeter/index.md
@@ -0,0 +1,262 @@
+---
+slug: load-testing-with-jmeter
+title: "How to Use JMeter to Load Test Your Applications"
+description: "Apache's JMeter is a robust open source tool for load testing web applications. Learn everything you need to get started using JMeter in this tutorial."
+authors: ['Nathaniel Stickman']
+contributors: ['Nathaniel Stickman']
+published: 2024-05-01
+keywords: ['jmeter load testing','jmeter download','jmeter tutorial']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[Apache JMeter: Getting Started](https://jmeter.apache.org/usermanual/get-started.html)'
+- '[Wikipedia: Software Load Testing](https://en.wikipedia.org/wiki/Software_load_testing)'
+- '[Microsoft Learn: Performance Testing Guidance for Web Applications - Types of Performance Testing](https://learn.microsoft.com/en-us/previous-versions/msp-n-p/bb924357(v=pandp.10))'
+---
+
+Apache's JMeter offers a robust and portable open source solution for load testing. JMeter can measure the performance of web applications and provide insights about the kinds of loads they can handle.
+
+Through this tutorial, learn more about load testing and how to get started using JMeter to load test web applications.
+
+## Before You Begin
+
+1. If you have not already done so, create a Linode account and Compute Instance. See our [Getting Started with Linode](/docs/guides/getting-started/) and [Creating a Compute Instance](/docs/guides/creating-a-compute-instance/) guides.
+
+1. Follow our [Setting Up and Securing a Compute Instance](/docs/guides/set-up-and-secure/) guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.
+
+{{< note >}}
+This guide is written for a non-root user. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see the [Users and Groups](/docs/guides/linux-users-and-groups/) guide.
+{{< /note >}}
+
+The commands, file contents, and other instructions provided throughout this guide may include example values. These are typically domain names, IP addresses, usernames, passwords, and other values that are unique to you. The table below identifies these example values and explains what to replace them with:
+
+| Example Values: | Replace With: |
+| -- | -- |
+| {{< placeholder "EXAMPLE_USER" >}} | The username of the current user on your local machine. |
+| {{< placeholder "apache-jmeter-5.6.2" >}} | The actual version of JMeter you download, if different. |
+
+## What Is Load Testing?
+
+*Load testing* is a type of performance testing used specifically to ensure that web applications perform effectively under projected usage loads. Load tests help anticipate application performance and resource needs, which can aid in finding the best solutions for an uninterrupted user experience.
+
+Load tests most often use test plans to model both normal and peak traffic along with typical user behavior. From there, the load test should provide metrics on response times, resource usage, and more.
+
+The data gathered from load tests can be invaluable. It can show whether your infrastructure is sufficient to handle the expected traffic, or if you have more resources than you actually need. Using load testing to fine-tune your infrastructure and allocated resources can ensure more consistent user experiences and save money.
+
+## Why Use JMeter?
+
+[Apache JMeter](https://jmeter.apache.org/index.html) is an open source load testing tool built entirely in Java. It's widely used, relatively easy to get started with, and serves as a robust option for measuring and analyzing web application performance under various loads. Because it's open source, JMeter is both accessible and cost efficient, and it benefits from the community support that comes with a popular open source project.
+
+## How to Install JMeter
+
+JMeter should be installed on the machine from which you want to run tests, not necessarily the machine running the application you want tested. Additionally, JMeter strongly prefers using its GUI to create test plans, so whatever system it's installed on should have GUI access.
+
+1. Install Java. Java version 8 or higher is required for JMeter. Use one of the following methods to install Java on your system:
+
+ - Oracle provides [installation documentation](https://www.java.com/en/download/help/download_options.html) for installing its version of Java on a variety of systems. Download the corresponding installation files through Oracle's [Java downloads](https://www.java.com/en/download/manual.jsp) page.
+
+ - Install the [OpenJDK](https://openjdk.org/), an open source alternative to Oracle's JDK. Follow the instructions on the [OpenJDK installation](https://openjdk.org/install/) page to install OpenJDK on your system.
+
+ {{< note type="secondary">}}
+ You can install the OpenJDK conveniently on macOS through the [Homebrew](https://brew.sh/) package manager, using the [openjdk](https://formulae.brew.sh/formula/openjdk) formula. Those on newer Apple Silicon-based models may need this additional line of code:
+
+ ```command {title="macOS"}
+ sudo ln -sfn /opt/homebrew/opt/openjdk/libexec/openjdk.jdk /Library/Java/JavaVirtualMachines/openjdk.jdk
+ ```
+ {{< /note >}}
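+
+    As another example, on a **Debian-based** Linux distribution, the OpenJDK can typically be installed from the system package manager (a sketch; the exact package name may vary by release):
+
+    ```command {title="Debian/Ubuntu"}
+    sudo apt update
+    sudo apt install -y default-jdk
+    ```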
+
+1. Download the JMeter binaries package from the [Apache JMeter download page](https://jmeter.apache.org/download_jmeter.cgi).
+
+    This tutorial uses the zip file with the binaries for JMeter 5.6.2. If you download a different version, be sure to replace the version naming throughout this guide accordingly.
+
+1. **Optional:** To verify the file, use the link for the SHA512 corresponding to your download. Then follow the commands in the [Checking Hashes](https://www.apache.org/info/verification.html#CheckingHashes) section of Apache's documentation on verifying Apache software.
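+
+    For example, on Linux you can compute the hash locally and compare it against the published value (a sketch that assumes the zip file is in your current working directory):
+
+    ```command
+    sha512sum {{< placeholder "apache-jmeter-5.6.2" >}}.zip
+    ```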
+
+1. Extract the downloaded package to a directory you want your JMeter instance installed to. This tutorial assumes that you extract the package to your current user's home directory (e.g. `/home/{{< placeholder "EXAMPLE_USER" >}}/{{< placeholder "apache-jmeter-5.6.2" >}}` or `C:\Users\{{< placeholder "EXAMPLE_USER" >}}\{{< placeholder "apache-jmeter-5.6.2" >}}`).
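+
+    On Linux or macOS, for instance, you can extract the zip file into your home directory with `unzip` (a sketch; the archive unpacks into its own `{{< placeholder "apache-jmeter-5.6.2" >}}` directory):
+
+    ```command
+    unzip {{< placeholder "apache-jmeter-5.6.2" >}}.zip -d ~/
+    ```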
+
+1. Locate the JMeter executable for your system within the `bin` subdirectory of the extracted directory.
+
+ - On Linux & macOS, the executable is `jmeter`.
+
+ - On Windows, the executable is `jmeter.bat`.
+
+1. Use the command line to run the JMeter executable. The example commands below assume you installed version 5.6.2 within the current user's home directory.
+
+ {{< tabs >}}
+ {{< tab "Linux & macOS" >}}
+ ```command
+ ~/{{< placeholder "apache-jmeter-5.6.2" >}}/bin/jmeter
+ ```
+ {{< /tab >}}
+ {{< tab "Windows" >}}
+ ```command
+ C:\Users\{{< placeholder "EXAMPLE_USER" >}}\{{< placeholder "apache-jmeter-5.6.2" >}}\bin\jmeter.bat
+ ```
+ {{< /tab >}}
+ {{< /tabs >}}
+
+You should see the JMeter GUI start up:
+
+![The JMeter GUI after startup](jmeter-startup.png)
+
+{{< note >}}
+For Linux and macOS, you can follow our guide on how to [Add a Directory to the PATH on Linux](/docs/guides/how-to-add-directory-to-path/). Add the `bin` directory to your shell path to start up JMeter with the simpler `jmeter` command.
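+
+For example, with a Bash shell on Linux, appending a line like the following to `~/.bashrc` adds the JMeter `bin` directory to your path (a sketch that assumes the install location used above):
+
+```command
+echo 'export PATH="$PATH:$HOME/{{< placeholder "apache-jmeter-5.6.2" >}}/bin"' >> ~/.bashrc
+source ~/.bashrc
+```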
+{{< /note >}}
+
+## How to Start Load Testing with JMeter
+
+You can now start using JMeter to load test web applications. JMeter's GUI provides a set of tools for building a test plan, including recording browser actions for modeling user behavior. Afterward, execute the plan using JMeter's CLI and see how your applications respond to testing.
+
+To help get started, this tutorial also includes steps for creating a simple web application to run JMeter against. This provides a clear demonstration of how JMeter works so you can try it out before using JMeter on your own applications.
+
+### Preparing an Example Application
+
+To create a base web application to test with JMeter, follow the steps here on an application server. These steps specifically assume a Linode Compute Instance server is used. The instructions should work with most Debian-based and RHEL-derived distributions.
+
+1. Follow our guide on how to [Install and Use the Node Package Manager (NPM) on Linux](/docs/guides/install-and-use-npm-on-linux/). NPM handles the installation of the application framework and its dependencies as well as running the example application itself.
+
+1. Next.js works well for this example as it can create a base web application with only a few commands. Use the commands here to create a base Next.js project named `example-app` using the `create-next-app` executor. These commands put the application in the current user's home directory and then change into the new application directory.
+
+ ```command {title="Linode Instance Terminal"}
+ cd ~
+ npx create-next-app example-app
+ cd example-app
+ ```
+
+ Answer the prompts however you like or simply stick with the default values.
+
+ Learn more about building web applications with Next.js in our guide [Getting Started with Next.js](/docs/guides/getting-started-next-js/).
+
+1. Open port `3000` on your system's firewall. This is the default port for the example Next.js application. This port needs to be open in order for your browser and JMeter to access the application.
+
+    - For **Debian-based** distributions, refer to our guide on [How to Configure a Firewall with UFW](/docs/guides/configure-firewall-with-ufw/).
+
+    - For **RHEL-derived** distributions, refer to our guide on [Enabling and Configuring FirewallD on CentOS](/docs/guides/introduction-to-firewalld-on-centos/).
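+
+    For example, on a **Debian-based** system running UFW, opening the port might look like the following (a sketch; adjust it to match your firewall setup):
+
+    ```command {title="Linode Instance Terminal"}
+    sudo ufw allow 3000/tcp
+    ```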
+
+1. Start up the Next.js application. This runs the included "Welcome" application on a development server. While this should not be used for production applications, it works well to demonstrate JMeter's capabilities.
+
+ ```command {title="Linode Instance Terminal"}
+ npm run dev
+ ```
+
+1. To verify that the example application is running, open a web browser and navigate to port `3000` on your system's public IP address. For example, if your system's public IP address is `192.0.2.17`, you would navigate to `http://192.0.2.17:3000`.
+
+    ![Example Next.js application running in a web browser](example-nextjs-app.png)
+
+### Creating a JMeter Test Plan
+
+JMeter test plans are created within the JMeter GUI. From there, JMeter provides a range of tools for specifying how web applications should be accessed and tested. JMeter even includes a recording feature for recording user behavior using browser actions.
+
+Learn more about building test plans in JMeter's [Building a Test Plan](https://jmeter.apache.org/usermanual/build-test-plan.html) documentation. For the particular kind of test plan shown here, also refer to JMeter's [Building a Web Test Plan](https://jmeter.apache.org/usermanual/build-web-test-plan.html) documentation.
+
+The example test plan developed in this section specifically models web application access by several simultaneous users. The test verifies that users are able to access the page and that the page delivers the expected content. From this base test, you can easily expand both the number of modeled users and extent of scenarios to fit your particular needs.
+
+1. In the JMeter GUI, choose **File** > **Templates** from the top menu bar.
+
+1. Select **Building a Web Test Plan** from the dropdown then click the **Create** button. This creates a test plan from a template specifically for testing web applications.
+
+1. The left pane should now have a test plan named **build-web-test-plan**. Under it, select the **Scenario 1** item.
+
+ Doing so opens a **Thread Group** form. A thread group essentially defines a group of modeled users to run against a web application.
+
+ For this example test plan, complete the form as follows and leave any values not mentioned here at their defaults:
+
+ - **Number of Threads (users)**: `50`
+
+ - **Ramp-up period (seconds)**: `1`
+
+ - **Loop Count**: `5`
+
+ - **Duration (seconds)**: `30`
+
+    ![JMeter Thread Group settings for the example test plan](jmeter-thread-group.png)
+
+1. Use the arrow to expand **Scenario 1** to reveal HTTP and other configurations. These define how the modeled users (threads) should interact with the web application.
+
+1. Select the **HTTP Requests Default** item. This sets up the default values for HTTP requests made by the modeled users. Use the values listed below, leaving anything not mentioned at its default:
+
+ - **Server Name or IP**: The public IP address or domain name for your web application (e.g. `192.0.2.45`)
+
+ - **Port Number**: `3000`
+
+    ![JMeter HTTP Request Defaults settings](jmeter-http-defaults.png)
+
+1. **Optional:** Select the **Home Page** item. This defines a specific HTTP request to be made by the modeled users. For this example, you do not need to change anything. However, it's helpful to familiarize yourself with this item to understand how to customize it later.
+
+ For example, you would likely want similar HTTP request items, spaced with waiting intervals, for each page on your web application. These would then model a user's journey through the application.
+
+ Should you want to add a new HTTP request item, you could do so with the following steps:
+
+ 1. Right-click (or control-click on macOS) the thread group, named **Scenario 1** in the template.
+
+ 1. Select **Add** > **Sampler** > **HTTP Request** from the menu.
+
+ 1. Select the resulting item from the left menu, and use the form to customize it to your needs.
+
+1. Nested beneath the **Home Page** item is an **Assertion** item. Select this to get a **Response Assertion** form, which has JMeter look for a particular feature in the response.
+
+ Under **Patterns to Test**, remove the default content and replace it with the following:
+
+ ```command
+ src/pages/index.tsx
+ ```
+
+    ![JMeter Response Assertion settings](jmeter-assertion.png)
+
+    This establishes a test condition: when a modeled user accesses the home page, the response should contain the given text.
+
+1. When finished, use the **Save** option from the top toolbar or from the **File** menu to save the test plan. For this example, the test plan is saved as:
+
+ - On Linux & macOS: `~/build-web-test-plan.jmx`
+
+ - On Windows: `C:\Users\{{< placeholder "EXAMPLE_USER" >}}\build-web-test-plan.jmx`
+
+The test plan is now ready to run. Exit the JMeter GUI, and continue on to the next section to see how the test plan performs.
+
+### Running the JMeter Load Test
+
+To start running a load test with JMeter, you need to use its command-line interface (CLI). Access the CLI just as you would the GUI, but add the `-n` option to the command.
+
+There are a few other command-line options you should leverage to help effectively run load tests with JMeter:
+
+- `-t`: designates the location of your test plan
+
+- `-l`: designates a location for a log file
+
+- `-e`: tells JMeter to create a report
+
+- `-o`: designates a directory to store the report in
+
+Use all of these together to run a load test using the test plan developed above. The example command here assumes the JMeter installation and the test plan are stored as described further above. Additionally, the command creates a log file in the same directory as the test plan, along with another directory for the report.
+
+{{< tabs >}}
+{{< tab "Linux & macOS" >}}
+```command
+~/{{< placeholder "apache-jmeter-5.6.2" >}}/bin/jmeter -n -t ~/build-web-test-plan.jmx -l ~/build-web-test-logs.jtl -e -o ~/build-web-test-reports/
+```
+{{< /tab >}}
+{{< tab "Windows" >}}
+```command
+C:\Users\{{< placeholder "EXAMPLE_USER" >}}\{{< placeholder "apache-jmeter-5.6.2" >}}\bin\jmeter -n -t C:\Users\{{< placeholder "EXAMPLE_USER" >}}\build-web-test-plan.jmx -l C:\Users\{{< placeholder "EXAMPLE_USER" >}}\build-web-test-logs.jtl -e -o C:\Users\{{< placeholder "EXAMPLE_USER" >}}\build-web-test-reports\
+```
+{{< /tab >}}
+{{< /tabs >}}
+
+The load test begins immediately, and you should see summary output in the command-line terminal similar to the following:
+
+```output
+Creating summariser
+Created the tree successfully using ../build-web-test-plan.jmx
+Starting standalone test @ 2023 Apr 20 15:55:51 CDT (1682024151151)
+Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4445
+Warning: Nashorn engine is planned to be removed from a future JDK release
+summary = 500 in 00:00:11 = 44.2/s Avg: 474 Min: 84 Max: 1319 Err: 0 (0.00%)
+Tidying up ... @ 2023 Apr 20 15:56:03 CDT (1682024163082)
+... end of run
+```
+
+When the load test finishes, the report is available as an HTML file located in the report directory specified in the command. Using the example above, the file is located at either `~/build-web-test-reports/index.html` or `C:\Users\{{< placeholder "EXAMPLE_USER" >}}\build-web-test-reports\index.html`. Open the `index.html` file with a web browser to view the report.
+
+![Example JMeter HTML report](jmeter-report.png)
+
+## Conclusion
+
+This guide lays the groundwork for using JMeter to load test web applications. The features covered above are enough to establish basic web application testing. However, JMeter has more features to offer. Explore them further in the JMeter documentation linked below and throughout this tutorial.
\ No newline at end of file
diff --git a/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-assertion.png b/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-assertion.png
new file mode 100644
index 00000000000..86b5201101a
Binary files /dev/null and b/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-assertion.png differ
diff --git a/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-http-defaults.png b/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-http-defaults.png
new file mode 100644
index 00000000000..0aa2f247347
Binary files /dev/null and b/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-http-defaults.png differ
diff --git a/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-report.png b/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-report.png
new file mode 100644
index 00000000000..194990cea49
Binary files /dev/null and b/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-report.png differ
diff --git a/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-startup.png b/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-startup.png
new file mode 100644
index 00000000000..c82829097ba
Binary files /dev/null and b/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-startup.png differ
diff --git a/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-thread-group.png b/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-thread-group.png
new file mode 100644
index 00000000000..435015c3844
Binary files /dev/null and b/docs/guides/tools-reference/tools/load-testing-with-jmeter/jmeter-thread-group.png differ
diff --git a/docs/guides/tools-reference/tools/schedule-tasks-with-cron/index.md b/docs/guides/tools-reference/tools/schedule-tasks-with-cron/index.md
index 317f37f99eb..688a42b43c2 100644
--- a/docs/guides/tools-reference/tools/schedule-tasks-with-cron/index.md
+++ b/docs/guides/tools-reference/tools/schedule-tasks-with-cron/index.md
@@ -18,6 +18,8 @@ image: schedule-tasks-with-cron.png
Cron is a classic utility found on Linux and UNIX systems for running tasks at pre-determined times or intervals. These tasks are referred to as **Cron tasks** or **Cron jobs**. Use Cron to schedule automated updates, generate reports, check for available disk space and notify if the space is below a certain amount.
+{{< youtube "v952m13p-b4" >}}
+
## How to Use Cron and crontab - The Basics
### What is a Cron Job?
diff --git a/docs/guides/uptime/logs/how-to-use-fluentd-for-data-logging/index.md b/docs/guides/uptime/logs/how-to-use-fluentd-for-data-logging/index.md
new file mode 100644
index 00000000000..625041bfbe5
--- /dev/null
+++ b/docs/guides/uptime/logs/how-to-use-fluentd-for-data-logging/index.md
@@ -0,0 +1,328 @@
+---
+slug: how-to-use-fluentd-for-data-logging
+title: "Using Fluentd for Open Source Unified Data Logging"
+description: "Discover the power of Fluentd for data logging. This guide introduces this open source tool, provides steps to install it, and a simple example to get you started."
+authors: ["Tom Henderson"]
+contributors: ["Tom Henderson"]
+published: 2023-08-17
+modified: 2024-05-07
+keywords: ['fluentd for data logging','fluentd','open source data logging','unified logging layer','logging with json']
+license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
+external_resources:
+- '[Fluentd](https://www.fluentd.org/)'
+---
+
+Fluentd is open source software under the umbrella of the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/projects/fluentd/) Graduated Hosted Projects. It unifies data collection across multiple sources and aggregates logs into structured JSON data for easy consumption. Its plug-in-capable architecture makes it compatible with a wide range of applications and workflows, allowing it to parse and reformat a variety of data. Plug-ins use JSON-formatted data concepts, allowing programmers to adapt specific applications as inputs or outputs by modifying existing plug-ins and their configurations.
+
+## What is a Unified Logging Layer?
+
+Fluentd takes diverse input data from various application log types, parses it, and renders a uniform output stream of your choosing. This data can then be used by other applications, archived uniformly, or analyzed further. Fluentd uses *directives* to match expressions, control the flow, and route data.
+
+The Fluentd uniform output stream can be sent to many different application destinations, including NoSQL and SQL databases, archival applications, and monitoring console apps. Fluentd unifies input logs and messages, then outputs them in a structured stream defined by the Fluentd configuration and the configuration of its plug-ins.
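+
+As an illustrative sketch (the `app.**` tag and file path here are hypothetical), the `copy` output plug-in can send a single unified stream to several destinations at once:
+
+```file {title="fluentd-copy-example.conf"}
+# Route every event tagged app.* to two destinations at once.
+<match app.**>
+  @type copy
+  <store>
+    # Archive a copy to local files.
+    @type file
+    path /var/log/td-agent/app-archive
+  </store>
+  <store>
+    # Also print each event to stdout (the td-agent log).
+    @type stdout
+  </store>
+</match>
+```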
+
+Data outputs from Fluentd are handled similarly, through administratively defined or standardized streams. These are set by the Fluentd configuration, the options of the chosen plug-ins, or a combination of both.
+
+Fluentd is capable of handling many diverse data inputs and output destinations concurrently through the use of plug-ins. Input and output data can stream at different speeds and event cycles.
+
+Several Fluentd instances can run in parallel on different hosts for fault tolerance and continuity. Data sources external to the host running Fluentd require network pathway considerations, including firewalls, routing, congestion, and encryption. Fluentd connections can be configured to use SSL/TLS encryption.
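+
+As a minimal sketch of an encrypted pathway (the aggregator address `192.0.2.10` is a placeholder), one Fluentd instance can forward its events to another over TLS using the `forward` output plug-in:
+
+```file {title="fluentd-forward-example.conf"}
+# Forward all local events to a central Fluentd aggregator over TLS.
+<match **>
+  @type forward
+  transport tls
+  tls_verify_hostname true
+  <server>
+    host 192.0.2.10
+    port 24224
+  </server>
+</match>
+```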
+
+## Fluentd Plug-Ins
+
+Plug-ins are required to parse and process the data that flows through Fluentd. They are categorized by their *role*, listed below:
+
+- Input
+- Parser
+- Filter
+- Output
+- Formatter
+- Service Discovery
+- Buffer
+- Metrics
+
+Plug-ins use a naming convention that reflects their role. For example, `in_syslog` is an input plug-in, using the `in_` prefix.
+
+The output plug-ins, prefixed with `out_`, have three different flushing and buffering modes:
+
+- **Non-Buffered**: The plug-in does not buffer data. It writes or outputs results immediately after processing.
+
+- **Synchronous Buffered**: The plug-in outputs data in chunks whose size and flush behavior are set in its configuration. Chunks are sent to the destination at the specified rate, which helps prevent congestion at the destination.
+
+- **Asynchronous Buffered**: The plug-in stores data for later transmission.
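+
+As a sketch of how these modes are selected (the `app.**` tag and file path are hypothetical), a `<buffer>` section inside an output plug-in controls how and when chunks are flushed:
+
+```file {title="fluentd-buffer-example.conf"}
+<match app.**>
+  @type file
+  path /var/log/td-agent/app
+  <buffer>
+    # Flush buffered chunks on a fixed interval rather than immediately.
+    flush_mode interval
+    flush_interval 5s
+    # Limit the size of each chunk sent to the destination.
+    chunk_limit_size 1m
+  </buffer>
+</match>
+```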
+
+## Before You Begin
+
+1. If you have not already done so, create a Linode account and Compute Instance. See our [Getting Started with Linode](/docs/guides/getting-started/) and [Creating a Compute Instance](/docs/guides/creating-a-compute-instance/) guides. This guide focuses on Ubuntu and Debian Linux as hosts for Fluentd, although adaptations of Fluentd can be found for Windows and macOS as well.
+
+1. Follow our [Setting Up and Securing a Compute Instance](/docs/guides/set-up-and-secure/) guide to update your system and create a limited user account.
+
+1. Fluentd input and output are synchronized to a time source, and the Fluentd documentation recommends setting up a Network Time Protocol (NTP) daemon before installing the software. In cloud environments with many separate data sources, a single source of NTP synchronization is recommended. The NTP time becomes the basis for timestamping data through the parsing stages that Fluentd performs.
+
+{{< note >}}
+This guide is written for a non-root user. Commands that require elevated privileges are prefixed with `sudo`. If you’re not familiar with the `sudo` command, see the [Users and Groups](/docs/tools-reference/linux-users-and-groups/) guide.
+{{< /note >}}
+
+The commands, file contents, and other instructions provided throughout this guide may include placeholders. These are typically domain names, IP addresses, usernames, passwords, and other values that are unique to you. The table below identifies these placeholders and explains what to replace them with:
+
+| Placeholder: | Replace With: |
+| -- | -- |
+| `EXAMPLE_USER` | The username of the current user on your local machine. |
+
+### Required Resources
+
+1. Check the maximum number of file descriptors:
+
+ ```command
+ ulimit -n
+ ```
+
+ ```output
+ 1024
+ ```
+
+    If the output shows the default value of `1024`, increase the limit in the `/etc/security/limits.conf` file.
+
+1. Open the `/etc/security/limits.conf` file using a text editor with root permissions:
+
+ ```command
+ sudo nano /etc/security/limits.conf
+ ```
+
+1. Add the following lines to the end of the file, replacing `EXAMPLE_USER` with your actual username:
+
+ ```file {title="/etc/security/limits.conf"}
+ EXAMPLE_USER soft nofile 65536
+ EXAMPLE_USER hard nofile 65536
+ ```
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. To apply the new limits, reboot the system:
+
+ ```command
+ sudo reboot
+ ```
+
+1. When the system reboots, recheck the maximum number of file descriptors:
+
+ ```command
+ ulimit -n
+ ```
+
+ ```output
+ 65536
+ ```
+
+## Installing Fluentd
+
+Fluentd is deployed as a server application. There are two versions available: Fluentd and *td-agent*. The two behave similarly, but [there are differences](https://www.fluentd.org/faqs). Fluentd is available as a Ruby gem or as source code, while td-agent offers standard packages for Linux, macOS, and Windows. These examples use the td-agent installation.
+
+1. Run the appropriate cURL command for your operating system and version. The command installs td-agent and its dependencies for the chosen version:
+
+ {{< tabs >}}
+ {{< tab "Ubuntu 22.04" >}}
+ ```command
+ curl -fsSL https://toolbelt.treasuredata.com/sh/install-ubuntu-jammy-td-agent4.sh | sh
+ ```
+ {{< /tab >}}
+ {{< tab "Ubuntu 20.04" >}}
+ ```command
+ curl -fsSL https://toolbelt.treasuredata.com/sh/install-ubuntu-focal-td-agent4.sh | sh
+ ```
+ {{< /tab >}}
+ {{< tab "Debian 11" >}}
+ ```command
+ curl -fsSL https://toolbelt.treasuredata.com/sh/install-debian-bullseye-td-agent4.sh | sh
+ ```
+ {{< /tab >}}
+ {{< tab "Debian 10" >}}
+ ```command
+ curl -fsSL https://toolbelt.treasuredata.com/sh/install-debian-buster-td-agent4.sh | sh
+ ```
+ {{< /tab >}}
+ {{< /tabs >}}
+
+ ```output
+ Installation completed. Happy Logging!
+ ```
+
+1. Once the version-appropriate shell script has run successfully, check whether the service is `active (running)`:
+
+ ```command
+ sudo systemctl status td-agent.service
+ ```
+
+ If `active (running)`, the output should look like this:
+
+ ```output
+ ● td-agent.service - td-agent: Fluentd based data collector for Treasure Data
+ Loaded: loaded (/lib/systemd/system/td-agent.service; enabled; vendor pres>
+ Active: active (running) since Mon 2023-08-21 16:48:13 UTC; 57s ago
+ Docs: https://docs.treasuredata.com/display/public/PD/About+Treasure+Dat>
+ Main PID: 2102 (fluentd)
+ Tasks: 9 (limit: 4557)
+ Memory: 96.5M
+ CPU: 2.669s
+ CGroup: /system.slice/td-agent.service
+ ├─2102 /opt/td-agent/bin/ruby /opt/td-agent/bin/fluentd --log /var>
+ └─2105 /opt/td-agent/bin/ruby -Eascii-8bit:ascii-8bit /opt/td-agen>
+
+ Aug 21 16:48:11 localhost systemd[1]: Starting td-agent: Fluentd based data col>
+ Aug 21 16:48:13 localhost systemd[1]: Started td-agent: Fluentd based data coll
+ ```
+
+    If not, start the daemon:
+
+ ```command
+ sudo systemctl start td-agent.service
+ ```
+
+1. To have td-agent start automatically when the system reboots, run the following command:
+
+ ```command
+ sudo systemctl enable td-agent.service
+ ```
+
+## Testing Fluentd
+
+1. Open the `/etc/td-agent/td-agent.conf` file in a text editor with root permissions:
+
+ ```command
+ sudo nano /etc/td-agent/td-agent.conf
+ ```
+
+1. Append the following configuration to the bottom of the file:
+
+    ```file {title="/etc/td-agent/td-agent.conf"}
+    <match our.test>
+      @type stdout
+    </match>
+    ```
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Restart td-agent for the change to take effect:
+
+ ```command
+ sudo systemctl restart td-agent
+ ```
+
+1. Once the daemon starts, test it by using cURL to post a message to the built-in HTTP endpoint:
+
+ ```command
+ curl -X POST -d 'json={"json":"I’m Alive!"}' http://localhost:8888/our.test
+ ```
+
+1. Use the following command to view the result of the test:
+
+ ```command
+ tail -n 1 /var/log/td-agent/td-agent.log
+ ```
+
+ It should answer with a time stamp and the "I'm Alive!" message:
+
+ ```output
+ 2023-08-18 17:02:57.005253503 +0000 our.test: {"json":"I’m Alive!"}
+ ```
+
+## Syslog Application Example
+
+Ubuntu 20.04 LTS and 22.04 LTS Compute Instances come with the remote syslog daemon rsyslog pre-installed, and it is used in this example. Here, `rsyslog.conf` is modified to send log entries to the same port that the Fluentd td-agent is set to listen on.
+
+1. Log in to the system once it boots up.
+
+1. Open `rsyslog.conf` in a text editor with root permissions:
+
+ ```command
+ sudo nano /etc/rsyslog.conf
+ ```
+
+1. Append the following line to the bottom of the file:
+
+ ```file {title="/etc/rsyslog.conf"}
+ *.* @127.0.0.1:5440
+ ```
+
+    The above configuration line tells rsyslog to send syslog data to port `5440` of the local host. The single `@` sends the data over UDP, which is the default transport for Fluentd's syslog input.
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. After the file is saved, restart the rsyslog service:
+
+ ```command
+ sudo systemctl restart syslog
+ ```
+
+    Fluentd typically listens for messages through its plug-ins. In this example, however, the raw syslog messages are monitored unfiltered and unmodified. The td-agent configuration must be modified so that Fluentd listens for syslog-formatted data, continuing the example above with syslog as the input source on port `5440`.
+
+1. Open `td-agent.conf` in a text editor with root permissions:
+
+ ```command
+ sudo nano /etc/td-agent/td-agent.conf
+ ```
+
+1. Append the following lines to the bottom of the file:
+
+    ```file {title="/etc/td-agent/td-agent.conf"}
+    <source>
+      @type syslog
+      port 5440
+      tag system
+    </source>
+
+    <match system.**>
+      @type stdout
+    </match>
+    ```
+
+1. When done, press CTRL+X, followed by Y then Enter to save the file and exit `nano`.
+
+1. Restart td-agent for the change to take effect:
+
+ ```command
+ sudo systemctl restart td-agent
+ ```
+
+1. Rsyslog now outputs to the port where td-agent listens. Use the following command to confirm that the chain is working:
+
+ ```command
+    tail /var/log/td-agent/td-agent.log
+ ```
+
+ Entries from syslog are found in the td-agent.log:
+
+ ```output
+ 2023-08-21 17:26:09.000000000 +0000 system.auth.info: {"host":"localhost","ident":"sshd","pid":"4304","message":"Connection closed by 37.129.207.106 port 42964 [preauth]"}
+ 2023-08-21 17:26:13.000000000 +0000 system.auth.info: {"host":"localhost","ident":"sshd","pid":"4310","message":"Connection closed by 5.218.67.72 port 45500 [preauth]"}
+ 2023-08-21 17:26:13.000000000 +0000 system.auth.info: {"host":"localhost","ident":"sshd","pid":"4308","message":"Connection closed by 83.121.149.248 port 36697 [preauth]"}
+ 2023-08-21 17:26:19.000000000 +0000 system.auth.info: {"host":"localhost","ident":"sshd","pid":"4315","message":"Connection closed by 80.191.23.250 port 39788 [preauth]"}
+ 2023-08-21 17:26:20.000000000 +0000 system.auth.info: {"host":"localhost","ident":"sshd","pid":"4313","message":"Connection closed by 87.248.129.189 port 51192 [preauth]"}
+ 2023-08-21 17:26:24.000000000 +0000 system.auth.info: {"host":"localhost","ident":"sshd","pid":"4318","message":"Connection closed by 91.251.66.145 port 38470 [preauth]"}
+ 2023-08-21 17:26:25.000000000 +0000 system.auth.info: {"host":"localhost","ident":"sshd","pid":"4320","message":"Connection closed by 37.129.101.243 port 39424 [preauth]"}
+ 2023-08-21 17:26:26.000000000 +0000 system.auth.info: {"host":"localhost","ident":"sshd","pid":"4322","message":"Connection closed by 151.246.203.48 port 11351 [preauth]"}
+ 2023-08-21 17:26:29.000000000 +0000 system.auth.info: {"host":"localhost","ident":"sshd","pid":"4325","message":"Connection closed by 204.18.110.253 port 43478 [preauth]"}
+ 2023-08-21 17:26:31.000000000 +0000 system.auth.info: {"host":"localhost","ident":"sshd","pid":"4327","message":"Connection closed by 5.214.204.211 port 39830 [preauth]"}
+ ```
+
+Rsyslog input now flows through Fluentd's unified logging layer into the td-agent log. This is an unfiltered output that an output plug-in can send to a desired archiving program, SIEM input, or other destination.
+
+Common log sources such as syslog [can have highly tailored processing with Fluentd controls](https://docs.fluentd.org/input/syslog) applied.
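+
+For example, to archive the same `system.**` events to local files instead of printing them to the td-agent log, the `stdout` match block above could be swapped for a sketch like the following (the archive path is hypothetical):
+
+```file {title="/etc/td-agent/td-agent.conf"}
+# Write syslog events to files under /var/log/td-agent/syslog-archive.
+<match system.**>
+  @type file
+  path /var/log/td-agent/syslog-archive
+</match>
+```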
+
+## Fluentd Directives
+
+In the rsyslog input example shown above, the information is not filtered. Fluentd uses configuration file directives to manipulate data inputs. The Fluentd directives are:
+
+- **Source**: determines input sources
+- **Match**: determines output destinations by matching event tags against patterns
+- **Filter**: determines the event processing pipeline
+- **System**: sets system-wide configuration
+- **Label**: groups outputs and filters for internal routing of data
+- **Worker**: limits a directive to specific workers
+- **@Include**: includes other configuration files
+
+Behavior is controlled by the type of plug-in(s) used; how records are matched (accepted or rejected based on regular expression or pattern matches), filtered, tagged, and assigned to workers; system-wide directives; and any configuration pulled in through `@include` files.
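+
+A sketch of how several directives combine in a single configuration (the tag, label name, and included path are hypothetical) might look like the following:
+
+```file {title="fluentd-directives-example.conf"}
+# system: set process-wide options.
+<system>
+  workers 2
+</system>
+
+# source: accept events over HTTP and route them to the @APP label.
+<source>
+  @type http
+  port 8888
+  @label @APP
+</source>
+
+# label: group the filter and match stages for this pipeline.
+<label @APP>
+  # filter: keep only events whose message field contains "error".
+  <filter app.**>
+    @type grep
+    <regexp>
+      key message
+      pattern /error/
+    </regexp>
+  </filter>
+
+  # match: send the filtered events to stdout.
+  <match app.**>
+    @type stdout
+  </match>
+</label>
+
+# @include: pull in additional configuration files.
+@include conf.d/*.conf
+```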
+
+## Conclusion
+
+Fluentd is highly customizable through its own configuration and the configuration of the input and output plug-ins used. The unified logging layer that Fluentd provides becomes the input for many application destinations, such as archives, databases, SIEMs, management consoles, and other log-processing apps. Its scope is shaped by the chosen plug-ins and their customization, and multiple instances can be configured for fault tolerance.
+
+You should now have a basic understanding of Fluentd, along with some simple hands-on experience from the examples.
\ No newline at end of file
diff --git a/docs/guides/websites/cms/wordpress/how-to-set-up-multiple-wordpress-sites-with-lxd-containers/index.md b/docs/guides/websites/cms/wordpress/how-to-set-up-multiple-wordpress-sites-with-lxd-containers/index.md
index eb9b22d6853..6d143127e75 100644
--- a/docs/guides/websites/cms/wordpress/how-to-set-up-multiple-wordpress-sites-with-lxd-containers/index.md
+++ b/docs/guides/websites/cms/wordpress/how-to-set-up-multiple-wordpress-sites-with-lxd-containers/index.md
@@ -357,8 +357,9 @@ To finish the setup for your WordPress sites, complete the WordPress installatio
| Website | Database Name | Username | Password | Database Host | Table Prefix |
|---------|---------------|---------------|--------------|---------------|--------------|
-| https://apache1.example.com | wpApache1 | wpUserApache1 | Create a complex and unique password | db.lxd | wp_ |
-| https://nginx1.example.com | wpNginx1 | wpUserNginx1 | Create a complex and unique password | db.lxd | wp_ |
+| https://apache1.example.com | wpApache1 | wpUserApache1 | Create a complex and unique password | db.lxd | `wp_` |
+| https://nginx1.example.com | wpNginx1 | wpUserNginx1 | Create a complex and unique password | db.lxd | `wp_` |
+
{{< note >}}
The passwords that you choose during the installation wizard should be unique and different from the passwords used in the earlier [database setup section](#configure-the-database-for-each-wordpress-installation).
{{< /note >}}
diff --git a/docs/guides/websites/ecommerce/install-opencart-on-centos-7/index.md b/docs/guides/websites/ecommerce/install-opencart-on-centos-7/index.md
index bc36f1a4cba..00f810a0f6a 100644
--- a/docs/guides/websites/ecommerce/install-opencart-on-centos-7/index.md
+++ b/docs/guides/websites/ecommerce/install-opencart-on-centos-7/index.md
@@ -21,7 +21,7 @@ relations:
- distribution: CentOS 7
---
-
+
## What is OpenCart?
diff --git a/docs/guides/websites/lms/install-canvas-lms-on-ubuntu-2004/index.md b/docs/guides/websites/lms/install-canvas-lms-on-ubuntu-2004/index.md
index 29eab944ae7..90ea34b5cac 100644
--- a/docs/guides/websites/lms/install-canvas-lms-on-ubuntu-2004/index.md
+++ b/docs/guides/websites/lms/install-canvas-lms-on-ubuntu-2004/index.md
@@ -24,6 +24,8 @@ relations:
key: how-to-install-canvas
keywords:
- distribution: Ubuntu 20.04
+deprecated: true
+deprecated_link: '/docs/guides/install-canvas-lms-on-ubuntu-2204/'
---
[Canvas](https://www.instructure.com/canvas) is a popular learning management system (LMS) noteworthy for its modern design and ease of use. Canvas provides a comprehensive website for education and training courses, whether those courses are in-person, online, or a mix of the two. Moreover, Canvas is [open source](https://github.com/instructure/canvas-lms). You can freely download and install an instance on your server, giving you a higher degree of control than with a hosted LMS.
@@ -108,7 +110,7 @@ Canvas specifically requires version **2.7** of Ruby, which the default package
1. Install Ruby and its development components:
- sudo apt-get install ruby2.6 ruby2.6-dev zlib1g-dev libxml2-dev libsqlite3-dev postgresql libpq-dev libxmlsec1-dev curl make g++
+ sudo apt-get install ruby2.7 ruby2.7-dev zlib1g-dev libxml2-dev libsqlite3-dev postgresql libpq-dev libxmlsec1-dev curl make g++
1. Install Bundler, which Canvas uses for managing its Ruby libraries ("Gems"). Canvas specifically calls for version **2.1.4** of Bundler:
diff --git a/docs/guides/websites/lms/install-canvas-lms-on-ubuntu-2204/index.md b/docs/guides/websites/lms/install-canvas-lms-on-ubuntu-2204/index.md
index 5f70f642b18..6e8ced5f219 100644
--- a/docs/guides/websites/lms/install-canvas-lms-on-ubuntu-2204/index.md
+++ b/docs/guides/websites/lms/install-canvas-lms-on-ubuntu-2204/index.md
@@ -13,6 +13,11 @@ external_resources:
- '[Canvas](https://www.instructure.com/canvas)'
- '[What is Learning Management System](https://www.shareknowledge.com/blog/what-learning-management-system-and-why-do-i-need-one)'
- '[PostgreSQL Client Authentication](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html)'
+relations:
+ platform:
+ key: how-to-install-canvas
+ keywords:
+ - distribution: Ubuntu 22.04
---
[Canvas](https://www.instructure.com/canvas) is a modern open-source Learning Management System (LMS) by Instructure, Inc. that helps makes distance learning possible. An LMS like Canvas is a software application or web-based technology that you use to plan, implement, and assess a specific learning process. This guide helps you install all of its prerequisites, install Canvas LMS on Ubuntu, perform required Canvas setups, ensure your Canvas setup is secure, and then access your Canvas setup. This guide uses the **Ubuntu 22.04** distribution.
diff --git a/docs/products/platform/accounts/guides/parent-child-accounts/index.md b/docs/products/platform/accounts/guides/parent-child-accounts/index.md
index 30d9e1385e3..cb609e1e293 100644
--- a/docs/products/platform/accounts/guides/parent-child-accounts/index.md
+++ b/docs/products/platform/accounts/guides/parent-child-accounts/index.md
@@ -4,7 +4,7 @@ description: "Learn how parent and child accounts can help Akamai partners manag
published: 2024-04-23
modified: 2024-04-29
keywords: ["akamai partners", "parent", "child", "parent/child relationship"]
-aliases: ['products/platform/accounts/guides/parent-child-accounts/']
+aliases: ['products/platform/accounts/guides/parent-child-accounts/','/guides/parent-child-accounts/']
promo_default: false
---
diff --git a/docs/release-notes/api/v4.174.0.md b/docs/release-notes/api/v4.174.0.md
new file mode 100644
index 00000000000..808b5469d94
--- /dev/null
+++ b/docs/release-notes/api/v4.174.0.md
@@ -0,0 +1,23 @@
+---
+title: API v4.174.0
+date: 2024-04-17
+version: 4.174.0
+---
+
+### Added
+
+Included new endpoints for VPC-related IP addresses:
+
+- **VPC IP Addresses List** ([GET /vpc/ips](/docs/api/vpcs/#vpc-ip-addresses-list))
+- **VPC IP Addresses View** ([GET /vpc/{id}/ips](/docs/api/vpcs/#vpc-ip-addresses-view))
+
+### Updated
+
+- **Networking Information List** ([GET /linode/instances/{linodeId}/ips](/docs/api/linode-instances/#networking-information-list)). Added the `vpc` array that includes all VPC IP addresses for a specified Linode.
+- **Updated operations that allow you to set a time to live (TTL)**. Values of 30 and 120 seconds are supported.
+
+### Fixed
+
+- Removed the message for IPv6 beta support with VPCs when creating a Linode. (IPv6 is not supported.)
+- **Disk Update** ([PUT /linode/instances/{linodeId}/disks/{diskId}](/docs/api/linode-instances/#disk-update__request-body-schema)). Only a disk's `label` can be updated with this operation.
+- Replaced references to `none` with `null` for private Images, to address APIv4's multi-language support.
\ No newline at end of file
diff --git a/docs/release-notes/api/v4.175.0.md b/docs/release-notes/api/v4.175.0.md
new file mode 100644
index 00000000000..bad5fb10649
--- /dev/null
+++ b/docs/release-notes/api/v4.175.0.md
@@ -0,0 +1,45 @@
+---
+title: API v4.175.0
+date: 2024-05-01
+version: 4.175.0
+---
+
+### Added
+
+Included new endpoints for parent-child account support:
+
+- **Child Account List** ([GET /account/child-accounts](/docs/api/account/#child-account-list))
+- **Child Account View** ([GET /account/child-accounts/{euuid}](/docs/api/account/#child-account-view))
+- **Proxy User Token Create** ([POST /account/child-accounts/{euuid}/token](/docs/api/account/#proxy-user-token-create))
+
+### Updated
+
+- Modified existing endpoints to include specifics for parent-child account support:
+
+ - **Users List** ([GET /account/users](/docs/api/account/#users-list))
+ - **User View** ([GET /account/users/{username}](/docs/api/account/#user-view))
+ - **User Create** ([POST /account/users](/docs/api/account/#user-create))
+ - **User Delete** ([DELETE /account/users/{username}](/docs/api/account/#user-delete))
+ - **User Update** ([PUT /account/users/{username}](/docs/api/account/#user-update))
+ - **Profile Update** ([PUT /profile](/docs/api/profile/#profile-update))
+ - **User's Grants View** ([GET /account/users/{username}/grants](/docs/api/account/#users-grants-view) – Added the new `child_account_access` grant.)
+ - **User's Grants Update** ([PUT /account/users/ {username}/grants](/docs/api/account/#users-grants-update))
+ - **Account Cancel** ([POST /account/cancel](/docs/api/account/#account-cancel))
+ - **Account Update** ([PUT /account](/docs/api/account/#account-update))
+ - **Personal Access Token Create** ([POST /profile/tokens](/docs/api/profile/#personal-access-token-create))
+
+- Pointed out non-availability of these billing-related endpoints for child account use:
+
+ - **Account Update** ([PUT /account](/docs/api/account/#account-update))
+ - **Credit Card Add/Edit** ([POST /account/credit-card](/docs/api/account/#credit-card-addedit))
+ - **Payment Method Add** ([POST /account/payment-methods/](/docs/api/account/#payment-method-add))
+ - **Payment Make** ([POST /account/payments/](/docs/api/account/#payment-make))
+ - **Promo Credit Add** ([POST /account/promo-codes](/docs/api/account/#promo-credit-add))
+
+- Other minor edits for formatting and compatibility.
+
+### Fixed
+
+- **Payment Make** ([POST /account/payments](/docs/api/account/#payment-make)). Removed references to CVV, which is no longer required by the vendor.
+- **Linode Create** ([POST /linode/instances](/docs/api/linode-instances/#linode-create)). The request body example incorrectly listed `"nat_1_1": "add"` instead of the correct `"nat_1_1": "any"`.
+- **NodeBalancer CLI commands**. Updated to use proper operators.
\ No newline at end of file