diff --git a/macros/serverless-jobs/automate-resources-management.mdx b/macros/serverless-jobs/automate-resources-management.mdx index 58d0fe0171..a0551377f6 100644 --- a/macros/serverless-jobs/automate-resources-management.mdx +++ b/macros/serverless-jobs/automate-resources-management.mdx @@ -49,7 +49,7 @@ Serverless Jobs are perfectly adapted for these autonomous tasks, as we do not n 9. In the **Execution** tab, define the desired Scaleway CLI command, as shown in the examples below: - **Power Instances on and off** - ```sh + ```bash # Power on /scw instance server start 11111111-1111-1111-1111-111111111111 @@ -57,11 +57,11 @@ Serverless Jobs are perfectly adapted for these autonomous tasks, as we do not n /scw instance server stop 11111111-1111-1111-1111-111111111111 ``` - **Create a snapshot of an Instance volume** - ```sh + ```bash /scw instance snapshot create volume-id=11111111-1111-1111-1111-111111111111 ``` - **Create a backup of an Instance** - ```sh + ```bash /scw instance server backup 11111111-1111-1111-1111-111111111111 ``` 10. Click **Create job**. diff --git a/pages/apple-silicon/how-to/connect-to-mac-mini-ssh.mdx b/pages/apple-silicon/how-to/connect-to-mac-mini-ssh.mdx index 17c5a9987d..31226333e2 100644 --- a/pages/apple-silicon/how-to/connect-to-mac-mini-ssh.mdx +++ b/pages/apple-silicon/how-to/connect-to-mac-mini-ssh.mdx @@ -39,7 +39,7 @@ You can connect directly to the terminal of your Mac mini using the SSH protocol 4. Open your terminal application and use the SSH command provided on the **Overview** page to connect. - The SSH command will be in the format: - ```sh + ```bash ssh your_mac_mini_username@ ``` - Replace `` with your Mac mini username. diff --git a/pages/apple-silicon/how-to/update-os-mac-mini.mdx b/pages/apple-silicon/how-to/update-os-mac-mini.mdx index 996d274d6a..bdbd2804e6 100644 --- a/pages/apple-silicon/how-to/update-os-mac-mini.mdx +++ b/pages/apple-silicon/how-to/update-os-mac-mini.mdx @@ -59,16 +59,16 @@ The recommended method to update the macOS is to reinstall your Mac mini with an To manually update the operating system using the `softwareupdate` tool, follow these steps: 1. List all available updates: - ```sh + ```bash softwareupdate --list ``` 2. Install all available updates: - ```sh + ```bash sudo softwareupdate --install ``` If you want to upgrade selected packages only, use the following command: - ```sh + ```bash softwareupdate --install package-name ``` diff --git a/pages/apple-silicon/quickstart.mdx b/pages/apple-silicon/quickstart.mdx index b1a4f4e439..f6d1424e00 100644 --- a/pages/apple-silicon/quickstart.mdx +++ b/pages/apple-silicon/quickstart.mdx @@ -70,7 +70,7 @@ Refer to our detailed documentation for Windows, Linux, and macOS for OS specifi 3. Click the name of the Mac mini you want to connect to. The **Overview** page for your selected Mac mini displays. 4. Open your terminal application and use the SSH command provided on the **Overview** page to connect. - The SSH command will be in the format: - ```sh + ```bash ssh your_mac_mini_username@ ``` - Replace `` with your Mac mini username. 
diff --git a/pages/apple-silicon/troubleshooting/cant-connect-using-ssh.mdx b/pages/apple-silicon/troubleshooting/cant-connect-using-ssh.mdx index b7ffacf0db..cf4577c334 100644 --- a/pages/apple-silicon/troubleshooting/cant-connect-using-ssh.mdx +++ b/pages/apple-silicon/troubleshooting/cant-connect-using-ssh.mdx @@ -50,7 +50,7 @@ Repeated failed login attempts can trigger Scaleway’s security mechanisms, blo #### Attempt SSH connection again After the reboot, attempt to reconnect using: -```sh +```bash ssh -i /path/to/your/private_key user@ ``` Replace `/path/to/your/private_key` with your actual private key location and `` with your Mac mini’s IP address. diff --git a/pages/apple-silicon/troubleshooting/cant-connect-using-vnc.mdx b/pages/apple-silicon/troubleshooting/cant-connect-using-vnc.mdx index c21a9f7648..f9bf10b604 100644 --- a/pages/apple-silicon/troubleshooting/cant-connect-using-vnc.mdx +++ b/pages/apple-silicon/troubleshooting/cant-connect-using-vnc.mdx @@ -36,7 +36,7 @@ You are unable to establish a remote desktop (VNC) connection to your Scaleway M #### Verify the server status Run the following command in a terminal: -```sh +```bash ping -c 5 ``` If `ping` fails: @@ -49,7 +49,7 @@ If `ping` fails: #### Verify the VNC connection Run the following command: -```sh +```bash nc -zv ``` If the connection fails: @@ -59,7 +59,7 @@ If the connection fails: #### Verify the SSH server response Run the command: -```sh +```bash nc -zv 22 ``` If the connection fails, [reboot the server](/apple-silicon/how-to/reboot-mac-mini/). @@ -84,26 +84,26 @@ If all else fails, [reinstall macOS](/apple-silicon/how-to/reinstall-mac-mini/) ### Enabling and configuring Packet Filter (pf) 1. Open the pf configuration file in a text editor to restrict access to screen sharing: - ```sh + ```bash sudo nano /etc/pf.conf ``` 2. Add the following lines to the file and save it: - ```sh + ```bash block in on en0 proto tcp from any to any port 5900 pass in on en0 proto tcp from to any port 5900 ``` 3. Apply the configuration: - ```sh + ```bash sudo pfctl -f /etc/pf.conf ``` ### Restarting screen sharing via SSH 1. Connect via SSH: - ```sh + ```bash ssh your_mac_mini_username@ ``` 2. Restart screen sharing: - ```sh + ```bash sudo killall screensharingd ``` diff --git a/pages/container-registry/troubleshooting/common-errors-when-pushing-images-to-container-registry.mdx b/pages/container-registry/troubleshooting/common-errors-when-pushing-images-to-container-registry.mdx index 03511c5359..bef569cc50 100644 --- a/pages/container-registry/troubleshooting/common-errors-when-pushing-images-to-container-registry.mdx +++ b/pages/container-registry/troubleshooting/common-errors-when-pushing-images-to-container-registry.mdx @@ -30,18 +30,18 @@ You are unable to push images to Scaleway's Container Registry. ### Review error messages and logs - Run the push command with verbose output for more details: - ```sh + ```bash docker push --debug ``` - Check the Docker client logs for warnings or errors: - ```sh + ```bash docker logs ``` - Verify Scaleway’s Container Registry [status](https://status.scaleway.com/) for any ongoing incidents. ### Check Docker configuration and version - Ensure your Docker client is up-to-date by running: - ```sh + ```bash docker --version ``` - Check for configuration errors in the Docker daemon and client. @@ -58,7 +58,7 @@ You are unable to push images to Scaleway's Container Registry. 
## Additional troubleshooting steps - Try pushing a small test image to verify general functionality: - ```sh + ```bash docker pull alpine && docker tag alpine /test-image && docker push /test-image ``` - Review Docker’s official [troubleshooting guides](https://docs.docker.com/tags/troubleshooting/) for further insights. diff --git a/pages/data-warehouse/how-to/connect-applications.mdx b/pages/data-warehouse/how-to/connect-applications.mdx index eacf959105..cde63f2949 100644 --- a/pages/data-warehouse/how-to/connect-applications.mdx +++ b/pages/data-warehouse/how-to/connect-applications.mdx @@ -40,7 +40,7 @@ To connect your deployment with BI tools, refer to the [dedicated documentation] - ```sh + ```bash clickhouse client \ --host .dtwh..scw.cloud \ --port 9440 \ @@ -50,7 +50,7 @@ To connect your deployment with BI tools, refer to the [dedicated documentation] ``` - ```sh + ```bash mysql -h .dtwh..scw.cloud \ -P 9004 \ -u scwadmin \ @@ -61,7 +61,7 @@ To connect your deployment with BI tools, refer to the [dedicated documentation] - ```sh + ```bash echo 'SELECT 1' | curl 'https://scwadmin:@.dtwh..scw.cloud:8443' -d @- ``` diff --git a/pages/data-warehouse/how-to/import-data.mdx b/pages/data-warehouse/how-to/import-data.mdx index 560461e8a5..bd154c0294 100644 --- a/pages/data-warehouse/how-to/import-data.mdx +++ b/pages/data-warehouse/how-to/import-data.mdx @@ -34,7 +34,7 @@ Scaleway Data Warehouse for ClickHouse® allows you to quickly import any type o 5. In a terminal, paste and execute the copied command to connect to your deployment. Make sure to replace the placeholders with the corresponding values. - ```sh + ```bash clickhouse client \ --host \ --port 9440 \ @@ -71,7 +71,7 @@ The `s3` Storage Engine creates a table that points to a data table stored in an The `clickhouse-client` executes an INSERT query to populate a table in your deployment by specifying the URL of the source Object Storage bucket and the file format. - ```sh + ```bash clickhouse-client --query="INSERT INTO your_table FORMAT CSVWithNames" \ --url "https://my-bucket.s3.scaleway.com/data/my_data.csv" \ --input_format_with_names=1 diff --git a/pages/data-warehouse/quickstart.mdx b/pages/data-warehouse/quickstart.mdx index 48471bac79..9bcc2c4960 100644 --- a/pages/data-warehouse/quickstart.mdx +++ b/pages/data-warehouse/quickstart.mdx @@ -68,7 +68,7 @@ You can now execute SQL commands. 3. Copy the ClickHouse® CLI command, and execute it in a terminal to connect to your deployment. Make sure to replace the placeholders beforehand. - ```sh + ```bash clickhouse client \ --host .dtwh..scw.cloud \ --port 9440 \ diff --git a/pages/dedibox-ip-failover/how-to/configure-network-virtual-machine.mdx b/pages/dedibox-ip-failover/how-to/configure-network-virtual-machine.mdx index 19cf9aa92a..12988799c6 100644 --- a/pages/dedibox-ip-failover/how-to/configure-network-virtual-machine.mdx +++ b/pages/dedibox-ip-failover/how-to/configure-network-virtual-machine.mdx @@ -33,7 +33,7 @@ Find below examples of network interface configurations on different distributio Since the release of version 18.04 (Bionic Beaver), Ubuntu has used Netplan for configuring network interfaces. For older releases, refer to the Debian configuration. 1. Log into your virtual machine and open the network configuration file `/etc/netplan/01-netcfg.yaml` in a text editor of your choice: - ```sh + ```bash sudo nano /etc/netplan/01-netcfg.yaml ``` 2. Create a network configuration as follows. 
Replace `` with your failover IP address: @@ -55,14 +55,14 @@ Since the release of version 18.04 (Bionic Beaver), Ubuntu has used Netplan for ``` 3. Save the file and exit the text editor. 4. Apply the new configuration: - ```sh + ```bash sudo netplan apply ``` ## Debian 1. Log into the virtual machine and edit the network configuration file: - ```sh + ```bash sudo nano /etc/network/interfaces ``` 2. Configure the network interface as follows. Replace `` with your failover IP address: @@ -76,7 +76,7 @@ Since the release of version 18.04 (Bionic Beaver), Ubuntu has used Netplan for ``` 3. Save the file and exit the text editor. 4. Set the DNS server information: - ```sh + ```bash sudo nano /etc/resolv.conf ``` 5. Add the following DNS resolvers: @@ -85,18 +85,18 @@ Since the release of version 18.04 (Bionic Beaver), Ubuntu has used Netplan for nameserver 51.159.47.26 ``` 6. Activate the network on your virtual machine: - ```sh + ```bash sudo ifup eth0 ``` *Alternatively, you can restart networking with:* - ```sh + ```bash sudo systemctl restart networking ``` ## CentOS 1. Log into the virtual machine and edit the network configuration file: - ```sh + ```bash sudo nano /etc/sysconfig/network-scripts/ifcfg-eth0 ``` 2. Configure the network interface as follows. Replace `` with your failover IP address and `` with the virtual MAC of the VM: @@ -118,7 +118,7 @@ Since the release of version 18.04 (Bionic Beaver), Ubuntu has used Netplan for ``` 3. Save and close the text editor. 4. Create or edit the routing configuration file: - ```sh + ```bash sudo nano /etc/sysconfig/network-scripts/route-eth0 ``` Add the following lines: @@ -127,7 +127,7 @@ Since the release of version 18.04 (Bionic Beaver), Ubuntu has used Netplan for default via 62.210.0.1 dev eth0 ``` 5. Activate the network interface: - ```sh + ```bash sudo ifup eth0 ``` diff --git a/pages/dedibox-ip-failover/how-to/configure-reverse-dns.mdx b/pages/dedibox-ip-failover/how-to/configure-reverse-dns.mdx index 60effe6b41..71c0ba3713 100644 --- a/pages/dedibox-ip-failover/how-to/configure-reverse-dns.mdx +++ b/pages/dedibox-ip-failover/how-to/configure-reverse-dns.mdx @@ -39,11 +39,11 @@ You can add failover IP addresses to each server based on your offer and service Reverse DNS updates may take some time to propagate. 
You can verify changes using the following commands: - Linux/macOS: - ```sh + ```bash dig -x ``` - Windows: - ```sh + ```bash nslookup ``` diff --git a/pages/dedibox/how-to/use-dedibackup-ftp-backup.mdx b/pages/dedibox/how-to/use-dedibackup-ftp-backup.mdx index e54deef63d..3f025dd53b 100644 --- a/pages/dedibox/how-to/use-dedibackup-ftp-backup.mdx +++ b/pages/dedibox/how-to/use-dedibackup-ftp-backup.mdx @@ -118,7 +118,7 @@ To connect to the Dedibackup service, we recommend `lftp` for interactive use an #### Example of interactive connection with lftp For interactive sessions, use `lftp` with the following command: -```sh +```bash apt install lftp # Requirement FTP_HOST="ftp://dedibackup-dc3.online.net" @@ -147,7 +147,7 @@ EOF #### Example of automated connection with curl When automating tasks, you can use `curl`, though command limitations may apply: -```sh +```bash # Upload a file curl -T "path_to_your_file.7z" -u "sd-XXXXX:your_password" ftp://dedibackup-dc3.online.net/ ``` diff --git a/pages/edge-services/reference-content/ssl-tls-certificate.mdx b/pages/edge-services/reference-content/ssl-tls-certificate.mdx index 6c63228df4..75f96820c3 100644 --- a/pages/edge-services/reference-content/ssl-tls-certificate.mdx +++ b/pages/edge-services/reference-content/ssl-tls-certificate.mdx @@ -122,7 +122,7 @@ openssl x509 -in cert.cer -out cert.pem When you have your key, your server certificate and your root certificate all in separate files, you can use the `cat` command to chain them together into one file, ready to be copied and pasted: -```sh +```bash cat private_key.pem cert.pem root_cert.pem > cert_chain.pem ``` diff --git a/pages/elastic-metal/api-cli/elastic-metal-with-api.mdx b/pages/elastic-metal/api-cli/elastic-metal-with-api.mdx index 0a114cfef1..7243940fb8 100644 --- a/pages/elastic-metal/api-cli/elastic-metal-with-api.mdx +++ b/pages/elastic-metal/api-cli/elastic-metal-with-api.mdx @@ -31,13 +31,13 @@ Besides creating your Elastic Metal servers from the graphical [Scaleway console 1. Open a terminal on your computer and set your secret API key, your SSH key ID, and your Project ID as variables. - ```sh + ```bash export SCW_SECRET_KEY="" export SCW_SSH_KEY="" export SCW_PROJECT_ID="" ``` 2. Retrieve a list of all operating systems available in the desired Availability Zone. - ```sh + ```bash curl https://api.scaleway.com/baremetal/v1/zones/fr-par-2/offers -H "X-Auth-Token: $SCW_SECRET_KEY" | jq . | grep "EM-" ``` @@ -157,7 +157,7 @@ Besides creating your Elastic Metal servers from the graphical [Scaleway console } ``` 5. The server is being delivered to your account and is automatically being installed on the operating system chosen. You can retrieve the status of the installation using the following API call: - ```sh + ```bash curl https://api.scaleway.com/baremetal/v1/zones/fr-par-2/servers/{server_id} -H "X-Auth-Token: $SCW_SECRET_KEY" | jq . ``` diff --git a/pages/elastic-metal/api-cli/elastic-metal-with-cli.mdx b/pages/elastic-metal/api-cli/elastic-metal-with-cli.mdx index 7a1901f0be..21a68382f2 100644 --- a/pages/elastic-metal/api-cli/elastic-metal-with-cli.mdx +++ b/pages/elastic-metal/api-cli/elastic-metal-with-cli.mdx @@ -33,7 +33,7 @@ The [Scaleway Command Line Interface (CLI)](https://github.com/scaleway/scaleway 1. 
Type the following command in your terminal to create your Elastic Metal server: - ```sh + ```bash scw baremetal server create name=name-of-your-server type=EM-A210R-SATA zone=fr-par-2 ``` @@ -75,7 +75,7 @@ The [Scaleway Command Line Interface (CLI)](https://github.com/scaleway/scaleway 1. Type the following command in your terminal to see a list of available OSes: - ```sh + ```bash scw baremetal os list zone=fr-par-2 ``` @@ -94,7 +94,7 @@ The [Scaleway Command Line Interface (CLI)](https://github.com/scaleway/scaleway ``` 2. Write down the ID of the OS you want to install. 3. Type the following command to display the list of your SSH key's ID: - ```sh + ```bash scw iam ssh-key list ``` @@ -106,7 +106,7 @@ The [Scaleway Command Line Interface (CLI)](https://github.com/scaleway/scaleway ``` 4. Write down your SSH key ID, as you will need it in the next steps. 5. Type the following command to install an OS on your Elastic Metal server: - ```sh + ```bash scw baremetal server install ID-of-your-elastic-metal-server os-id=ID-of-OS-you-want-to-install hostname=hostname-for-your-server ssh-key-ids.0=your-ssh-key-ID zone=fr-par-2 ``` @@ -164,7 +164,7 @@ There are many other functionalities you can access for your Elastic Metal serve Type the following command in your terminal: -```sh +```bash scw baremetal server start your-elastic-metal-server-ID zone=fr-par-2 ``` The following output displays, and you will see "starting" next to the `Status` field: @@ -216,7 +216,7 @@ PingStatus up Type the following command in your terminal: -```sh +```bash scw baremetal server stop your-elastic-metal-server-ID zone=fr-par-2 ``` @@ -264,7 +264,7 @@ PingStatus up Type the following command: -```sh +```bash scw baremetal server reboot your-elastic-metal-server-ID zone=fr-par-2 ``` @@ -316,7 +316,7 @@ PingStatus down Type the following command in your terminal: -```sh +```bash scw baremetal server delete your-elastic-metal-server-ID zone=fr-par-2 ``` @@ -352,7 +352,7 @@ PingStatus down Enter the following command to make sure that your server has been deleted: -```sh +```bash scw baremetal server list zone=fr-par-2 ``` diff --git a/pages/elastic-metal/how-to/configure-flexible-ip.mdx b/pages/elastic-metal/how-to/configure-flexible-ip.mdx index 85eae5fce6..2eab195d02 100644 --- a/pages/elastic-metal/how-to/configure-flexible-ip.mdx +++ b/pages/elastic-metal/how-to/configure-flexible-ip.mdx @@ -65,7 +65,7 @@ Your server now responds on both the primary IP address and the flexible IP addr 1. Log into your server using SSH with a user having super-user rights. 2. Open the file `/etc/network/interfaces` with superuser rights in your favorite text editor and configure the networking for your machine. 3. Edit the file and add the flexible IP as shown in the following example: - ```sh + ```bash # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface @@ -90,7 +90,7 @@ Your server now responds on both the primary IP address and the flexible IP addr 4. Save the file and quit the editor. 5. Bring the interface up using the `ifup` command: - ```sh + ```bash ifup eth0:0 ``` @@ -100,11 +100,11 @@ Your server now responds on both the primary IP address and the flexible IP addr 1. Log into your server using SSH using the `root` user. 2. Copy the default network configuration file to create an alias: - ```sh + ```bash cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0:0 ``` 3. 
Open the file `/etc/sysconfig/network-scripts/ifcfg-eth0:0` in your favorite text editor and modify it as in the following example: - ```sh + ```bash DEVICE="eth0:0" BOOTPROTO=static IPADDR="my_flexible_ip" diff --git a/pages/elastic-metal/how-to/configure-ipv6-hypervisor.mdx b/pages/elastic-metal/how-to/configure-ipv6-hypervisor.mdx index cf3144ddba..95c23ac79f 100644 --- a/pages/elastic-metal/how-to/configure-ipv6-hypervisor.mdx +++ b/pages/elastic-metal/how-to/configure-ipv6-hypervisor.mdx @@ -29,12 +29,12 @@ This guide covers the steps for configuring the network interfaces on different 1. Log into the virtual machine using SSH. 2. Identify the network interface: - ```sh + ```bash ip a ``` Take note of the interface name (e.g., `ens18`). 3. Find the IPv6 gateway: - ```sh + ```bash ip -6 route ``` Look for the line specifying the default route: @@ -46,7 +46,7 @@ This guide covers the steps for configuring the network interfaces on different ## Ubuntu - Configuration with Netplan 1. Open the Netplan configuration file: - ```sh + ```bash sudo nano /etc/netplan/01-netcfg.yaml ``` 2. Configure the network settings: @@ -76,14 +76,14 @@ This guide covers the steps for configuring the network interfaces on different ``` Replace the placeholders with actual values. 3. Apply the configuration: - ```sh + ```bash sudo netplan apply ``` ## Debian 1. Edit the network interfaces file: - ```sh + ```bash sudo nano /etc/network/interfaces ``` 2. Configure the network interface: @@ -99,7 +99,7 @@ This guide covers the steps for configuring the network interfaces on different gateway LINK_LOCAL_IPv6_GATEWAY ``` 3. Set the DNS resolver: - ```sh + ```bash sudo nano /etc/resolv.conf ``` 4. Add the following lines: @@ -108,14 +108,14 @@ This guide covers the steps for configuring the network interfaces on different nameserver 51.159.47.26 ``` 5. Activate the network configuration: - ```sh + ```bash sudo ifup ens18 ``` ## CentOS 1. Edit the network script file: - ```sh + ```bash sudo nano /etc/sysconfig/network-scripts/ifcfg-ens18 ``` 2. Configure the network interface: @@ -137,7 +137,7 @@ This guide covers the steps for configuring the network interfaces on different HWADDR=virtual:mac:address ``` 3. Enable the network interface: - ```sh + ```bash sudo ifup ens18 ``` @@ -146,31 +146,31 @@ This guide covers the steps for configuring the network interfaces on different If your IPv6 configuration does not work, try the following: 1. Check the interface configuration: - ```sh + ```bash ip a ip route ip -6 route ``` 2. Run a ping test: - ```sh + ```bash ping -6 google.com ``` 3. Verify DNS resolution: - ```sh + ```bash dig google.com dig -6 google.com ``` 4. Check firewall settings: - ```sh + ```bash sudo iptables -L -v -n sudo ip6tables -L -v -n ``` 5. Restart network services: - ```sh + ```bash sudo systemctl restart systemd-networkd ``` 6. 
Verify the link-local address: - ```sh + ```bash ip -6 addr show dev ens18 ping -6 LINK_LOCAL_IPv6_GATEWAY ``` diff --git a/pages/elastic-metal/troubleshooting/replace-failed-drive-software-raid.mdx b/pages/elastic-metal/troubleshooting/replace-failed-drive-software-raid.mdx index fe82f55635..ffa50a6040 100644 --- a/pages/elastic-metal/troubleshooting/replace-failed-drive-software-raid.mdx +++ b/pages/elastic-metal/troubleshooting/replace-failed-drive-software-raid.mdx @@ -49,7 +49,7 @@ Each Elastic Metal server uses a RAID1 configuration after installation from the An output as follows displays: - ```sh + ```bash Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md126 : active (auto-read-only) raid1 sdb3[1] sda3[0] 974869504 blocks super 1.2 [2/2] [UU] @@ -64,7 +64,7 @@ Each Elastic Metal server uses a RAID1 configuration after installation from the The faulty device is marked with `(F)`. 6. Remove the failed disk using the `mdadm --manage` command: - ```sh + ```bash mdadm --manage /dev/md0 --remove /dev/sdb2 ``` 7. Contact the technical support to replace the failed disk with a working one. diff --git a/pages/elastic-metal/troubleshooting/reset-admin-password-windows-server.mdx b/pages/elastic-metal/troubleshooting/reset-admin-password-windows-server.mdx index af8d318b84..48199a0f73 100644 --- a/pages/elastic-metal/troubleshooting/reset-admin-password-windows-server.mdx +++ b/pages/elastic-metal/troubleshooting/reset-admin-password-windows-server.mdx @@ -28,15 +28,15 @@ If you have lost this password, you can reset it via rescue mode. This guide exp 1. Reboot your Elastic Metal server into [rescue mode](/elastic-metal/how-to/use-rescue-mode/). 2. Log into rescue mode. 3. Install the packages `ntfs-3g` and `chntpw` using the APT package manager. - ```sh + ```bash sudo apt install ntfs-3g chntpw ``` 4. Mount the Windows system partition. - ```sh + ```bash sudo ntfsfix /dev/sda2 ``` 5. List the user accounts configured in Windows: - ```sh + ```bash sudo chntpw -l /mnt/Windows/System32/config/SAM ``` An output similar to the following displays: @@ -57,7 +57,7 @@ If you have lost this password, you can reset it via rescue mode. This guide exp The administrator account of your server can either be `Administrator` or `Administrateur` depending on the installation. 6. Edit the `Administrator` account by running the following command: - ```sh + ```bash sudo chntpw -u Administrator /mnt/Windows/System32/config/SAM ``` A menu displays as follows: diff --git a/pages/file-storage/how-to/mount-file-system.mdx b/pages/file-storage/how-to/mount-file-system.mdx index 39bafda50a..dd7972a8f7 100644 --- a/pages/file-storage/how-to/mount-file-system.mdx +++ b/pages/file-storage/how-to/mount-file-system.mdx @@ -27,7 +27,7 @@ This page explains how to mount a file system to one or several Scaleway Instanc 3. From the **Mount** section of the **Overview** page, copy the mounting command: - ```sh + ```bash mount -t virtiofs /mnt ``` @@ -37,7 +37,7 @@ This page explains how to mount a file system to one or several Scaleway Instanc 5. Run the previously copied mount command to mount your file system. Make sure to specify the appropriate mount point: - ```sh + ```bash mount -t virtiofs /mnt/my_fs ``` @@ -47,13 +47,13 @@ This page explains how to mount a file system to one or several Scaleway Instanc 6. 
Run the following command to display the file systems of your Instance: - ```sh + ```bash df -h ``` A list of your file systems displays, containing the one you just mounted: - ```sh + ```bash Filesystem Size Used Avail Use% Mounted on tmpfs 794M 992K 793M 1% /run /dev/sda1 8.0G 2.1G 6.0G 26% / diff --git a/pages/file-storage/how-to/unmount-file-system.mdx b/pages/file-storage/how-to/unmount-file-system.mdx index c3b83b982b..803d87ffe3 100644 --- a/pages/file-storage/how-to/unmount-file-system.mdx +++ b/pages/file-storage/how-to/unmount-file-system.mdx @@ -31,7 +31,7 @@ This page explains how to unmount a file system from one or several Scaleway Ins 5. Run the following command. Make sure to replace the placeholder and mount point with the appropriate values: - ```sh + ```bash umount /mnt/my_fs ``` diff --git a/pages/file-storage/quickstart.mdx b/pages/file-storage/quickstart.mdx index 22a614eb9b..f43e242e39 100644 --- a/pages/file-storage/quickstart.mdx +++ b/pages/file-storage/quickstart.mdx @@ -88,7 +88,7 @@ To mount a file system to an Instance, you must have [attached your file system 3. From the **Mount** section of the **Overview** page, copy the mounting command: - ```sh + ```bash mount -t virtiofs /mnt ``` @@ -98,7 +98,7 @@ To mount a file system to an Instance, you must have [attached your file system 5. Run the previously copied mount command to mount your file system. Make sure to specify the appropriate mount point: - ```sh + ```bash mount -t virtiofs /mnt/my_fs ``` @@ -108,13 +108,13 @@ To mount a file system to an Instance, you must have [attached your file system 6. Run the following command to display the file systems of your Instance: - ```sh + ```bash df -h ``` A list of your file systems displays, containing the one you just mounted: - ```sh + ```bash Filesystem Size Used Avail Use% Mounted on tmpfs 794M 992K 793M 1% /run /dev/sda1 8.0G 2.1G 6.0G 26% / @@ -136,7 +136,7 @@ Your file system is now mounted and accessible from the specified mount point in 5. Run the following command. Make sure to replace the placeholder and mount point with the appropriate values: - ```sh + ```bash umount /mnt/my_fs ``` diff --git a/pages/generative-apis/quickstart.mdx b/pages/generative-apis/quickstart.mdx index 87592a1c61..274b9c5669 100644 --- a/pages/generative-apis/quickstart.mdx +++ b/pages/generative-apis/quickstart.mdx @@ -41,7 +41,7 @@ The web playground displays. To start using Generative APIs in your code, you can install the OpenAI Python SDK. Run the following command: -```sh +```bash pip install openai ``` diff --git a/pages/gpu/how-to/use-pipenv.mdx b/pages/gpu/how-to/use-pipenv.mdx index f0a7c12b8b..1634c07810 100644 --- a/pages/gpu/how-to/use-pipenv.mdx +++ b/pages/gpu/how-to/use-pipenv.mdx @@ -37,13 +37,13 @@ You can view, install, uninstall, and update packages using simple `pipenv` comm 1. View installed packages and dependencies: - ```sh + ```bash pipenv graph ``` 2. Install a new package: - ```sh + ```bash pipenv install ``` @@ -53,13 +53,13 @@ You can view, install, uninstall, and update packages using simple `pipenv` comm 3. Uninstall a package: - ```sh + ```bash pipenv uninstall ``` 4. Update a package: - ```sh + ```bash pipenv update ``` @@ -77,7 +77,7 @@ Each Pipenv virtual environment has a Pipfile that details project dependencies, 1. View Pipfile contents: - ```sh + ```bash cat Pipfile ``` @@ -91,7 +91,7 @@ Each Pipenv virtual environment has a Pipfile that details project dependencies, 2. 
View Pipfile.lock contents: - ```sh + ```bash cat Pipfile.lock ``` @@ -105,19 +105,19 @@ Each Pipenv virtual environment has a Pipfile that details project dependencies, 2. Exit the current virtual environment: - ```sh + ```bash exit ``` 3. Navigate to the home directory: - ```sh + ```bash cd ~ ``` 4. Create a new project directory and navigate into it: - ```sh + ```bash mkdir my-proj && cd my-proj ``` @@ -127,13 +127,13 @@ Each Pipenv virtual environment has a Pipfile that details project dependencies, 5. Create a new virtual environment and generate a Pipfile: - ```sh + ```bash pipenv install ``` 6. Activate the virtual environment: - ```sh + ```bash pipenv shell ``` diff --git a/pages/gpu/quickstart.mdx b/pages/gpu/quickstart.mdx index 4b054946cd..accec9be97 100644 --- a/pages/gpu/quickstart.mdx +++ b/pages/gpu/quickstart.mdx @@ -51,7 +51,7 @@ To access a preinstalled working environment with all your favorite Python packa 1. Choose one of our [Docker AI images](/gpu/reference-content/docker-images/) (eg Tensorflox, Pytorch, Jax) based on your needs. 2. Run the following command to launch the Docker container. In the following example, we launch a container based on the **Tensorflow** image: - ```sh + ```bash docker run --runtime=nvidia -it --rm -p 8888:8888 -p 6006:6006 rg.fr-par.scw.cloud/scw-ai/tensorflow:latest /bin/bash ``` diff --git a/pages/gpu/reference-content/docker-images.mdx b/pages/gpu/reference-content/docker-images.mdx index e545cc3c60..8fb419ce37 100644 --- a/pages/gpu/reference-content/docker-images.mdx +++ b/pages/gpu/reference-content/docker-images.mdx @@ -11,7 +11,7 @@ Scaleway offers a range of ready-to-use AI Docker images. These Docker images ca You can pull the images from our Container Registry as follows: -```sh +```bash docker pull rg.fr-par.scw.cloud/scw-ai/ ``` @@ -31,7 +31,7 @@ The following commands show how to launch a container based on each of our vario ## Tensorflow -```sh +```bash docker run --runtime=nvidia -it --rm -p 8888:8888 -p 6006:6006 rg.fr-par.scw.cloud/scw-ai/tensorflow:latest /bin/bash ``` @@ -39,7 +39,7 @@ The main libraries included in the Tensorflow image are Tensorflow 2, Tensorboar ## Pytorch -```sh +```bash docker run --runtime=nvidia -it --rm -p 8888:8888 -p 6006:6006 rg.fr-par.scw.cloud/scw-ai/pytorch:latest /bin/bash ``` @@ -47,7 +47,7 @@ The main libraries included in the Pytorch image are Pytorch, Torch Audio, Torch ## Jax -```sh +```bash docker run --runtime=nvidia -it --rm -p 8888:8888 -p 6006:6006 rg.fr-par.scw.cloud/scw-ai/jax:latest /bin/bash ``` @@ -59,7 +59,7 @@ The main libraries included in the Jax image are Jax, Numpy, Scikit-Learn, Scipy This image is built on top of the official RAPIDS Docker image, which relies on Anaconda. -```sh +```bash docker run --runtime=nvidia -it --rm -p 8888:8888 -p 6006:6006 rg.fr-par.scw.cloud/scw-ai/rapids:latest /bin/bash ``` @@ -80,6 +80,6 @@ The main libraries included in the RAPIDS image are cuDF, cuML, cuGraph, cuxfilt If there are no dependency issues when building the image, the **all** image will try to include all the above-listed libraries into a single Docker image (except RAPIDS). 
-```sh +```bash docker run --runtime=nvidia -it --rm -p 8888:8888 -p 6006:6006 rg.fr-par.scw.cloud/scw-ai/all:latest /bin/bash ``` \ No newline at end of file diff --git a/pages/instances/api-cli/creating-backups.mdx b/pages/instances/api-cli/creating-backups.mdx index 8dd15f8551..2e3c2b823f 100644 --- a/pages/instances/api-cli/creating-backups.mdx +++ b/pages/instances/api-cli/creating-backups.mdx @@ -24,7 +24,7 @@ The Backup feature is used to back up your Instance data. It creates an image of Use the following commands to create a backup of your Instance using the [Scaleway CLI](/scaleway-cli/quickstart/). - ```sh + ```bash scw instance server backup server-id zone=fr-par-1 ``` @@ -33,12 +33,12 @@ The Backup feature is used to back up your Instance data. It creates an image of By default, the name of the image is built according to the name of the server and the date. You can specify a name for the image in the request: - ```sh + ```bash scw instance server backup zone=fr-par-1 name= ``` A backup request will create an image object. You can view it using: - ```sh + ```bash scw instance image get zone=fr-par-1 ``` An image contains one snapshot for each volume of the Instance. These snapshots are visible within the image response as `root_volume` and `extra_volumes` fields. @@ -73,11 +73,11 @@ The Backup feature is used to back up your Instance data. It creates an image of To delete a backup, run the following command: - ```sh + ```bash scw instance image delete zone=fr-par1 ``` It is also recommended to remove every snapshot related to the image by running the following command for each snapshot that is no longer needed: - ```sh + ```bash scw block snapshot delete zone=fr-par-1 ``` diff --git a/pages/instances/api-cli/managing-instance-snapshot-via-cli.mdx b/pages/instances/api-cli/managing-instance-snapshot-via-cli.mdx index 75fe3e6e5f..c593563c6b 100644 --- a/pages/instances/api-cli/managing-instance-snapshot-via-cli.mdx +++ b/pages/instances/api-cli/managing-instance-snapshot-via-cli.mdx @@ -32,7 +32,7 @@ scw block snapshot create [arg=value ...] The following arguments and flags are available to customize your command: -```sh +```bash ARGS: [volume-id] UUID of the volume to snapshot [name=] Name of the snapshot diff --git a/pages/instances/api-cli/snapshot-import-export-feature.mdx b/pages/instances/api-cli/snapshot-import-export-feature.mdx index c8dec84288..7083bafb81 100644 --- a/pages/instances/api-cli/snapshot-import-export-feature.mdx +++ b/pages/instances/api-cli/snapshot-import-export-feature.mdx @@ -38,7 +38,7 @@ More information on the QCOW2 file format, and how to use it can be found in the 3. Call the `export` snapshot API endpoint to initiate the snapshot export. For example, using curl: - ```sh + ```bash curl -X POST \ -H "X-Auth-Token: $SCW_SECRET_KEY" \ -H "Content-Type: application/json" \ @@ -87,7 +87,7 @@ Call the `import` snapshot API endpoint to initiate the snapshot import. 
For example, using curl: -```sh +```bash curl -X POST \ -H "X-Auth-Token: $SCW_SECRET_KEY" \ -H "Content-Type: application/json" \ diff --git a/pages/instances/api-cli/using-cloud-init.mdx b/pages/instances/api-cli/using-cloud-init.mdx index c44b219156..977eb3c25f 100644 --- a/pages/instances/api-cli/using-cloud-init.mdx +++ b/pages/instances/api-cli/using-cloud-init.mdx @@ -38,13 +38,13 @@ For `user_data` to be effective, it has to be added prior to the creation of the `@/path/to/cloud-config-file` is the path of your [Cloud-Init](/instances/how-to/use-boot-modes/#how-to-use-cloud-init) configuration file. Edit it as you wish. 2. Start your Instance - ```sh + ```bash scw start {server Id} ``` Since [version 2.3.1](https://github.com/scaleway/scaleway-cli/releases/tag/v2.3.1) of the Scaleway CLI a shorter command is available: - ```sh + ```bash scw instance server create image=ubuntu_focal name=myinstance cloud-init=@/path/to/cloud-config-file ``` @@ -52,7 +52,7 @@ For `user_data` to be effective, it has to be added prior to the creation of the The command line documentation is accessible on any cloud-init installed system. -```sh +```bash % cloud-init --help usage: cloud-init [-h] [--version] [--file FILES] diff --git a/pages/instances/how-to/connect-to-instance.mdx b/pages/instances/how-to/connect-to-instance.mdx index 9ce560c00f..c3c93e9d44 100644 --- a/pages/instances/how-to/connect-to-instance.mdx +++ b/pages/instances/how-to/connect-to-instance.mdx @@ -31,7 +31,7 @@ This page shows how to connect to your Scaleway Instance via SSH. Thanks to the 1. Open a terminal program. 2. Enter the command below into the terminal. Make sure you replace `your_private_key` with the filename of your private key (often `id_ed25519`) and `your_instance_ip` with the IP address of your Instance. - ```sh + ```bash ssh -i ~/.ssh/your_private_key root@your_instance_ip ``` diff --git a/pages/instances/how-to/enable-openssh-windows.mdx b/pages/instances/how-to/enable-openssh-windows.mdx index 1e1ab36f81..3abf9323d1 100644 --- a/pages/instances/how-to/enable-openssh-windows.mdx +++ b/pages/instances/how-to/enable-openssh-windows.mdx @@ -18,7 +18,7 @@ The latest release of **Windows Server 2022** and **Windows Server 2022 Core** i Use the following CLI command to create a new Instance with OpenSSH Server enabled: -```sh +```bash scw instance server create name=win2k22-core image=windows-server-2022-core tags.0=with-ssh type=POP2-2C-8G-WIN admin-password-encryption-ssh-key-id={ssh_key_id} ``` diff --git a/pages/instances/quickstart.mdx b/pages/instances/quickstart.mdx index 2c8620b11d..d316031ab0 100644 --- a/pages/instances/quickstart.mdx +++ b/pages/instances/quickstart.mdx @@ -53,7 +53,7 @@ Scaleway [Instances](/instances/concepts/#instance) are computing units that pro 1. Open a terminal program. 2. Enter the command below into the terminal. Make sure you replace `your_private_key` with the filename of your private key (often `id_rsa`) and `your_instance_ip` with the IP address of your Instance. - ```sh + ```bash ssh -i ~/.ssh/your_private_key root@your_instance_ip ``` 3. If / when prompted, allow connection to the host by typing `yes`, then press **Enter**. 
diff --git a/pages/instances/reference-content/enabling-dhcp-network-configuration-windows-server-2022.mdx b/pages/instances/reference-content/enabling-dhcp-network-configuration-windows-server-2022.mdx index 15ba4cbfb0..1bb9ef9cee 100644 --- a/pages/instances/reference-content/enabling-dhcp-network-configuration-windows-server-2022.mdx +++ b/pages/instances/reference-content/enabling-dhcp-network-configuration-windows-server-2022.mdx @@ -91,7 +91,7 @@ Once Serial Console access is confirmed, configure DHCP using the Server Configu Below is an example interaction for reference: -```sh +```bash =============================================================================== Welcome to Windows Server 2022 Datacenter =============================================================================== diff --git a/pages/instances/reference-content/identify-devices.mdx b/pages/instances/reference-content/identify-devices.mdx index 8a6db13d12..f926640b05 100644 --- a/pages/instances/reference-content/identify-devices.mdx +++ b/pages/instances/reference-content/identify-devices.mdx @@ -21,7 +21,7 @@ SCSI disks have multiple attributes, such as vendor and product/model. They also The `lsblk` can be used to list SCSI devices and will show these attributes: -```sh +```bash root@test-instance:~# lsblk --scsi NAME HCTL TYPE VENDOR MODEL REV SERIAL TRAN sda 0:0:1:0 disk SCW sbs v42 volume-a5fb1cc7-70d3-457f-b4 @@ -41,7 +41,7 @@ KERNEL=="sd*|cciss*", ENV{DEVTYPE}=="partition", ENV{ID_SERIAL}=="?*", SYMLINK+= In the first rule, the `sdX` kernel name is matched, and the `scsi_id` command is executed. Its output will be imported into the `udev` environment for the following rules. Let's see what the command outputs: -```sh +```bash root@test-instance:~# /lib/udev/scsi_id --export --whitelisted -d /dev/sda ID_SCSI=1 ID_VENDOR=SCW @@ -56,7 +56,7 @@ ID_SERIAL_SHORT=volume-a5fb1cc7-70d3-457f-b4e0-a757997a4b33 The third and fourth rules create the symlinks properly, using these attributes. This will result in the following symlinks being created: -```sh +```bash root@test-instance:~# ls -l /dev/disk/by-id/ total 0 lrwxrwxrwx 1 root root 9 Sep 19 09:14 scsi-0SCW_sbs_volume-a5fb1cc7-70d3-457f-b4e0-a757997a4b33 -> ../../sda @@ -81,7 +81,7 @@ KERNEL=="sd*", ENV{ID_VENDOR}=="SCW", SYMLINK+="disk/scw/$env{ID_SERIAL_SHORT}" This rule will create a symlink `/dev/disk/scw/volume-` (where `uuid` is the ID of the volume) for each volume: -```sh +```bash root@test-instance:~# ls -l /dev/disk/scw/ total 0 lrwxrwxrwx 1 root root 9 Mar 7 16:18 volume-a5fb1cc7-70d3-457f-b4e0-a757997a4b33 -> ../../sda @@ -97,7 +97,7 @@ VPC Private Networks to which the Instance is connected will appear as virtio PC As all PCI devices, they can be listed with the `lspci` command: -```sh +```bash root@test-instance:~# lspci -d '::0200' 00:02.0 Ethernet controller: Red Hat, Inc. 
Virtio network device @@ -110,7 +110,7 @@ By itself, the output of this command is not enough to distinguish between publi More interestingly, network interfaces can be listed generically using the `ip link show` command: -```sh +```bash root@test-instance:~# ip link show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 @@ -131,7 +131,7 @@ A simple and effective way to distinguish the public network interface from the Using the JSON output mode of the `ip` command and filtering with the `jq` JSON parser, we can thus list VPC Private Network interfaces: -```sh +```bash root@test-instance:~# ip -j link | jq -r '.[] | select(.address | test("02:00:00:.*")) | .ifname' ens5 ens6 @@ -140,7 +140,7 @@ ens6 Using the MAC address of the interfaces, it is also possible to distinguish between the different VPC Private Network interfaces. The MAC address of each interface is available through the API. For example, querying `/instances/v1//servers//private_nics`, where `` is the zone of the server and `` is the ID of the Instance gives: -```sh +```bash { "private_nics": [ { diff --git a/pages/instances/reference-content/manual-configuration-private-ips.mdx b/pages/instances/reference-content/manual-configuration-private-ips.mdx index e1d02f8bd5..e2a31ca2a2 100644 --- a/pages/instances/reference-content/manual-configuration-private-ips.mdx +++ b/pages/instances/reference-content/manual-configuration-private-ips.mdx @@ -59,7 +59,7 @@ Once you have [added your Instances to a Private Network](/instances/how-to/use- 3. Find the virtual interface corresponding to the Private Network using the `ip link show` command: - ```sh + ```bash root@virtual-instance:~# ip link show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 @@ -78,7 +78,7 @@ Once you have [added your Instances to a Private Network](/instances/how-to/use- The network interface name is not guaranteed to be stable and may change across reboot or poweroff and poweron actions, similarily to the rest of the PCI hierarchy. 4. For convenience, you can give a more significant name (e.g. `priv0`) to the Private Network interface. Configure the new interface name as follows: - ```sh + ```bash root@virtual-instance:~# ip link set down dev ens5 root@virtual-instance:~# ip link set name priv0 dev ens5 root@virtual-instance:~# ip link set up dev priv0 @@ -88,7 +88,7 @@ Once you have [added your Instances to a Private Network](/instances/how-to/use- This renaming action will not persist across reboots. See below for a solution. 5. Make these changes persistent at reboot to ensure the Private Networks interface always gets the same name based on its MAC address. This is done by adding the following rule to the `/etc/udev/rules.d/75-persistent-net-generator.rules` file. Make sure that you replace the address with the correct MAC address for your case: - ```sh + ```bash SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="02:00:00:00:1a:ae", NAME="priv0" ``` @@ -149,13 +149,13 @@ Once you have brought up the Private Network via the previous steps, you can con 2. Restart the network service to bring the configured interface up: - On **CentOS 7** use the following command: - ```sh + ```bash root@virtual-instance:~# systemctl restart network.service ``` - On **CentOS 8** use the following command: - ```sh + ```bash root@virtual-instance:~# systemctl restart NetworkManager.service ``` 3. 
Repeat these steps on the other Instances that will communicate within the Private Network. @@ -168,7 +168,7 @@ After you followed the steps above, you can test the connection between the Inst Enter the `ping` command, pinging the relevant IP address for your Instances/Private Network. -```sh +```bash root@virtual-instance:~# ping 192.168.42.3 PING 192.168.42.3 (192.168.42.3): 56 data bytes 64 bytes from 192.168.42.3: icmp_seq=0 ttl=64 time=0.824 ms diff --git a/pages/instances/reference-content/moving-instances-between-az-and-projects.mdx b/pages/instances/reference-content/moving-instances-between-az-and-projects.mdx index 28574e6e4c..0e2181c8dd 100644 --- a/pages/instances/reference-content/moving-instances-between-az-and-projects.mdx +++ b/pages/instances/reference-content/moving-instances-between-az-and-projects.mdx @@ -101,7 +101,7 @@ scw block snapshot import-from-object-storage name= bucket= type= zone= project-id= ``` diff --git a/pages/instances/troubleshooting/fix-broken-vpn-when-switching-openvpn-vm-to-routed-ip.mdx b/pages/instances/troubleshooting/fix-broken-vpn-when-switching-openvpn-vm-to-routed-ip.mdx index 62a1cb3f59..b03c325ba8 100644 --- a/pages/instances/troubleshooting/fix-broken-vpn-when-switching-openvpn-vm-to-routed-ip.mdx +++ b/pages/instances/troubleshooting/fix-broken-vpn-when-switching-openvpn-vm-to-routed-ip.mdx @@ -22,12 +22,12 @@ You can also avoid this situation altogether by installing the package prior to 1. Add Scaleway's stable PPA -```sh +```bash add-apt-repository ppa:scaleway/stable ``` 2. Add the scaleway-ovpn-scripts package containing the new scripts -```sh +```bash apt -y install scaleway-ovpn-scripts ``` @@ -37,12 +37,12 @@ From this point on, your Instance may be safely rebooted and you will no longer 1. Add Scaleway's stable PPA -```sh +```bash add-apt-repository ppa:scaleway/stable ``` 2. Add the scaleway-ovpn-scripts package containing the new scripts -```sh +```bash apt -y install scaleway-ovpn-scripts ``` diff --git a/pages/instances/troubleshooting/fix-cloud-init-debian12.mdx b/pages/instances/troubleshooting/fix-cloud-init-debian12.mdx index 69f213a30b..382462a9a2 100644 --- a/pages/instances/troubleshooting/fix-cloud-init-debian12.mdx +++ b/pages/instances/troubleshooting/fix-cloud-init-debian12.mdx @@ -12,7 +12,7 @@ Debian 12 (Bookworm) Instances created before June 2nd, 2023 were delivered with The issue has been fixed for Instances created after June 2nd, 2023. The Debian 12 image now uses the official Debian Unstable `cloud-init` package.
Run the following commands to fix the issue on an Instance affected by this bug: -```sh +```bash wget http://ftp.fr.debian.org/debian/pool/main/c/cloud-init/cloud-init_23.2-1_all.deb -O /tmp/cloud-init_23.2-1_all.deb dpkg -i /tmp/cloud-init_23.2-1_all.deb diff --git a/pages/instances/troubleshooting/fix-dns-routed-ipv6-only-debian-bullseye.mdx b/pages/instances/troubleshooting/fix-dns-routed-ipv6-only-debian-bullseye.mdx index 22300058a5..d1be4ca875 100644 --- a/pages/instances/troubleshooting/fix-dns-routed-ipv6-only-debian-bullseye.mdx +++ b/pages/instances/troubleshooting/fix-dns-routed-ipv6-only-debian-bullseye.mdx @@ -35,7 +35,7 @@ Due to its modern nature and active maintenance, [`netplan` is a favorable optio You can check whether your Debian Bullseye Instance is concerned by running the following command, where `UUID` is the identifier of your Instance: -```sh +```bash scw -o json instance server get UUID | jq '.routed_ip_enabled and ([.public_ips[] | select(.family != "inet6")] == [])' ``` @@ -63,15 +63,15 @@ The `netplan` package must be installed **before** you apply this procedure, or ### Checking for netplan To check whether your Instance has `netplan` installed, run the following command: - ```sh + ```bash dpkg-query -W netplan.io ```` The command should return an output like this, where `` is the currently installed version of the package, meaning you can skip directly to the first step of the procedure: - ```sh + ```bash netplan.io ``` If the tool is not installed, the command will print the following: - ```sh + ```bash dpkg-query: no packages found matching netplan.io ```` In this situation, proceed with the next section to install `netplan` before applying the procedure. @@ -83,7 +83,7 @@ In this situation, proceed with the next section to install `netplan` before app
1. *(optional)* If, **and only if**, your Instance is already booted using a routed IPv6-only setup, you need to temporarily configure your DNS resolver so that it can reach the Debian repositories, in order to install `netplan`. The following uses Google's DNS server: - ```sh + ```bash > /etc/resolv.conf cat < 1. Force `cloud-init` to set up the network configuration using `netplan`. - ```sh + ```bash > /etc/cloud/cloud.cfg.d/99_scw_ip6dns.cfg cat < 4. Enable the necessary `systemd` units: - ```sh + ```bash systemctl enable systemd-networkd-wait-online.service systemd-resolved.service ``` 5. Reboot the Instance: diff --git a/pages/instances/troubleshooting/fix-unreachable-focal-with-two-public-ips.mdx b/pages/instances/troubleshooting/fix-unreachable-focal-with-two-public-ips.mdx index 0acd5e616c..9a814b0b0c 100644 --- a/pages/instances/troubleshooting/fix-unreachable-focal-with-two-public-ips.mdx +++ b/pages/instances/troubleshooting/fix-unreachable-focal-with-two-public-ips.mdx @@ -21,16 +21,16 @@ A modified `cloud-init` package named `cloud-init_24.2-0ubuntu1~20.04.1+scaleway 1. Add Scaleway's stable PPA -```sh +```bash add-apt-repository ppa:scaleway/stable ``` 2. Add the modified cloud-init package -```sh +```bash apt -y install cloud-init ``` 3. Re-initialize cloud-init to fix the netplan profile -```sh +```bash cloud-init clean cloud-init init --local cloud-init init @@ -53,7 +53,7 @@ scw instance server start 2. Once the Instance is rebooted, log into your Instance using [SSH](/instances/how-to/connect-to-instance/) and set up the environment to be able to chroot into it: -```sh +```bash cat /proc/partitions major minor #blocks name @@ -72,7 +72,7 @@ major minor #blocks name ``` Then mount the partitions and get into the `chroot`: -```sh +```bash mount /dev/vda1 /mnt mount -o bind /sys /mnt/sys mount -o bind /proc /mnt/proc @@ -81,7 +81,7 @@ mount -o bind /run /mnt/run chroot /mnt ``` 3. Fix the DNS resolution file in the chroot -```sh +```bash rm -f /etc/resolv.conf ln -s /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf ``` @@ -91,11 +91,11 @@ add-apt-repository ppa:scaleway/stable apt -y install cloud-init ``` 5. Re-initialize cloud-init to fix the netplan profile: -```sh +```bash cloud-init clean && cloud-init init --local && cloud-init init ``` 6. Get out of the `chroot` and remove the mounts: -```sh +```bash umount /mnt/sys /mnt/proc /mnt/dev /mnt/run /mnt ``` 7. Stop the Instance, switch back the Instance's `boot_type` to `local`, and reboot the Instance: @@ -112,7 +112,7 @@ Once rebooted, your Instance will be reachable again. 8. Put a hold on the newly installed cloud-init: -```sh +```bash apt-mark hold cloud-init cloud-init set on hold. ``` \ No newline at end of file diff --git a/pages/instances/troubleshooting/fix-unreachable-noble-after-reboot.mdx b/pages/instances/troubleshooting/fix-unreachable-noble-after-reboot.mdx index 5519beb6f5..6a70c1b688 100644 --- a/pages/instances/troubleshooting/fix-unreachable-noble-after-reboot.mdx +++ b/pages/instances/troubleshooting/fix-unreachable-noble-after-reboot.mdx @@ -46,7 +46,7 @@ timeout 10 Replace `` with the unique ID of your Instance, e.g. `0500ebd2-d70d-49af-a969-3ac09b6f7fff`. 2. 
Once the Instance is rebooted, log into your Instance using [SSH](/instances/how-to/connect-to-instance/) and set up the environment to be able to chroot into it: - ```sh + ```bash cat /proc/partitions major minor #blocks name diff --git a/pages/instances/troubleshooting/reboot-from-faulty-kernel.mdx b/pages/instances/troubleshooting/reboot-from-faulty-kernel.mdx index 628ba756a6..98d46fe3a6 100644 --- a/pages/instances/troubleshooting/reboot-from-faulty-kernel.mdx +++ b/pages/instances/troubleshooting/reboot-from-faulty-kernel.mdx @@ -19,7 +19,7 @@ import Requirements from '@macros/iam/requirements.mdx' 1. Switch the Instance's `boot-type` to `rescue` and reboot your Instance into rescue mode using the CLI-Tools: - ```sh + ```bash scw instance server update {Instance_ID} boot-type=rescue scw instance server reboot {Instance_ID} ``` @@ -27,7 +27,7 @@ import Requirements from '@macros/iam/requirements.mdx' Replace `{Instance_ID}` with the unique ID of your Instance, e.g. `0500ebd2-d70d-49af-a969-3ac09b6f7fff`. 2. Once the Instance is rebooted, log into your Instance using [SSH](/instances/how-to/connect-to-instance/) and set up the environment to be able to chroot into it: - ```sh + ```bash cat /proc/partitions major minor #blocks name @@ -45,7 +45,7 @@ import Requirements from '@macros/iam/requirements.mdx' mount -o bind /dev /mnt/dev ``` 3. Once mounted, use the `chroot` command to get into your Instances' root file system. You can then change the `GRUB_DEFAULT` value to boot using the previous kernel: - ```sh + ```bash chroot /mnt nano /etc/default/grub ``` @@ -53,7 +53,7 @@ import Requirements from '@macros/iam/requirements.mdx' In the example above, we use `nano` as text editor. Feel free to use your favorite text editor to edit the file. Change the value of `GRUB_DEFAULT` to `"1 > 2": - ```sh + ```bash # head /etc/default/grub # If you change this file, run 'update-grub' afterward to update # /boot/grub/grub.cfg. @@ -80,7 +80,7 @@ import Requirements from '@macros/iam/requirements.mdx' done ``` 5. Switch back the Instance's `boot_type` to `local` and reboot the Instance: - ```sh + ```bash scw instance server update {Instance_ID} boot-type=local scw instance server reboot {Instance_ID} ``` @@ -89,7 +89,7 @@ import Requirements from '@macros/iam/requirements.mdx' ### Examples of failed boots * In the following example, the Instance may only boot with the root file system in `read-only` mode: - ```sh + ```bash [ OK ] Finished Remove Stale Onli…ext4 Metadata Check Snapshots. 
[ 4.219158] cloud-init[542]: Traceback (most recent call last): [ 4.220328] cloud-init[542]: File "/usr/bin/cloud-init", line 33, in diff --git a/pages/kubernetes/how-to/recover-space-etcd.mdx b/pages/kubernetes/how-to/recover-space-etcd.mdx index 9f594cab2f..456b89094e 100644 --- a/pages/kubernetes/how-to/recover-space-etcd.mdx +++ b/pages/kubernetes/how-to/recover-space-etcd.mdx @@ -20,7 +20,7 @@ This guide helps you to free up space on your database to avoid reaching this li * Dump your cluster resources to YAML format and show the characters count, you will have a rough estimation where to look for space to claim -```sh +```bash > kubectl api-resources --verbs=list --namespaced -o name | while read type; do echo -n "Kind: ${type}, Size: "; kubectl get $type -o yaml -A | wc -c; done Kind: configmaps, Size: 1386841 Kind: endpoints, Size: 82063 @@ -33,13 +33,13 @@ Kind: pods, Size: 3326153 * Looking for unused resources is a good approach, delete any Secrets, large ConfigMaps that are not used anymore in your cluster. - ```sh + ```bash > kubectl -n $namespace delete $ConfigMapName ``` * keep an eye on Helm Charts that are deploying a lot of custom resources (CRDs), they tend to fill up etcd space. You can find them by showing resource kinds - ```sh + ```bash > kubectl api-resources NAME SHORTNAMES APIVERSION NAMESPACED KIND configmaps cm v1 true ConfigMap @@ -57,7 +57,7 @@ Look for resources with an external apiversion (not _v1_, _apps/v1_, _storage.k8 * If you have a doubt on space taken by a resource, you can dump it to get its size - ```sh + ```bash > kubectl get nodefeature -n kube-system $node-feature-name -o yaml | wc -c 305545 // ~300KiB, big object ``` diff --git a/pages/kubernetes/how-to/upgrade-kubernetes-version.mdx b/pages/kubernetes/how-to/upgrade-kubernetes-version.mdx index c0a1c61a58..710008db78 100644 --- a/pages/kubernetes/how-to/upgrade-kubernetes-version.mdx +++ b/pages/kubernetes/how-to/upgrade-kubernetes-version.mdx @@ -59,7 +59,7 @@ From here, two options are available: you are either upgrading **one minor versi #### One minor version This option is the most straightforward and requires you to first upgrade your control plane. -```sh +```bash scw k8s cluster upgrade $CLUSTER_ID version=$NEW_K8S_VERSION ``` @@ -68,7 +68,7 @@ scw k8s cluster upgrade $CLUSTER_ID version=$NEW_K8S_VERSION Additionally, you can upgrade one pool independently by running the following command: -```sh +```bash scw k8s pool upgrade $POOL_ID version=$NEW_K8S_VERSION ``` @@ -76,7 +76,7 @@ If you wish to migrate your workload manually, you can do so by following the st Make sure to adapt the pool creation step. 
-```sh +```bash scw k8s pool create zone=$OLD_POOL_ZONE size=$SIZE_OF_YOUR_OLD_POOL version=$NEW_CLUSTER_VERSION cluster-id=$CLUSTER_ID ``` diff --git a/pages/kubernetes/reference-content/managing-load-balancer-ips.mdx b/pages/kubernetes/reference-content/managing-load-balancer-ips.mdx index 78065cd260..a55bfdb9bb 100644 --- a/pages/kubernetes/reference-content/managing-load-balancer-ips.mdx +++ b/pages/kubernetes/reference-content/managing-load-balancer-ips.mdx @@ -33,7 +33,7 @@ Load Balancer flexible IPs have the following limitations: Ensure that you have created an [API key](/iam/how-to/create-api-keys/) and run this call: -```sh +```bash curl -X POST \ -H "X-Auth-Token: $SCW_SECRET_KEY" \ -H "Content-Type: application/json" \ @@ -109,7 +109,7 @@ In the example below, we will: These steps show how we can use a reserved IP on Load Balancer creation and then “move” this IP from one service to another. -```sh +```bash # kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE coffee-svc ClusterIP 10.32.102.89 80/TCP 9s diff --git a/pages/managed-databases-for-postgresql-and-mysql/api-cli/import-data-to-managed-postgresql-databases.mdx b/pages/managed-databases-for-postgresql-and-mysql/api-cli/import-data-to-managed-postgresql-databases.mdx index 2e2f9f0709..64633d933a 100644 --- a/pages/managed-databases-for-postgresql-and-mysql/api-cli/import-data-to-managed-postgresql-databases.mdx +++ b/pages/managed-databases-for-postgresql-and-mysql/api-cli/import-data-to-managed-postgresql-databases.mdx @@ -298,19 +298,19 @@ To complete the following procedure, you must have: 1. In a terminal, run the command below to initialize a Meltano project named `migrate-postgresql`: - ```sh + ```bash meltano init migrate-postgresql ``` 2. Run the following command to access the newly created directory that contains the project: - ```sh + ```bash cd migrate-postgresql ``` 3. Run the following command to add a PostgreSQL-compatible extractor and loader: - ```sh + ```bash meltano add extractor tap-postgres meltano add loader target-postgres ``` @@ -319,7 +319,7 @@ To complete the following procedure, you must have: 1. Run the command below to configure the connection to your existing database: - ```sh + ```bash meltano config tap-postgres set --interactive ``` @@ -350,7 +350,7 @@ To complete the following procedure, you must have: 1. Run the command below to configure the connection to your existing database: - ```sh + ```bash meltano config tap-postgres set --interactive ``` @@ -372,7 +372,7 @@ To complete the following procedure, you must have: 1. Run the following command to execute the data import and loading: - ```sh + ```bash meltano run tap-postgres target-postgres ``` @@ -438,7 +438,7 @@ You can create a `.csv` file from an existing PostgreSQL table with the [COPY TO 2. 
Connect to your Managed Database for PostgreSQL using `psql`: - ```sh + ```bash psql "postgresql://{username}:{password}@{host}:5432/{databasename}?sslmode=require" ``` diff --git a/pages/managed-databases-for-postgresql-and-mysql/troubleshooting/database-instance-connectivity-issues.mdx b/pages/managed-databases-for-postgresql-and-mysql/troubleshooting/database-instance-connectivity-issues.mdx index 654896be8d..644678be5e 100644 --- a/pages/managed-databases-for-postgresql-and-mysql/troubleshooting/database-instance-connectivity-issues.mdx +++ b/pages/managed-databases-for-postgresql-and-mysql/troubleshooting/database-instance-connectivity-issues.mdx @@ -60,7 +60,7 @@ You can carry out the following actions: `rdb_troubleshoot.sh`: -```sh +```bash #!/bin/bash set -o nounset diff --git a/pages/managed-databases-for-redis/troubleshooting/database-instance-connectivity-issues.mdx b/pages/managed-databases-for-redis/troubleshooting/database-instance-connectivity-issues.mdx index 2b7c7c4f74..090f1fe4e2 100644 --- a/pages/managed-databases-for-redis/troubleshooting/database-instance-connectivity-issues.mdx +++ b/pages/managed-databases-for-redis/troubleshooting/database-instance-connectivity-issues.mdx @@ -60,7 +60,7 @@ You can carry out the following actions: `redis_troubleshoot.sh`: -```sh +```bash #!/bin/bash set -o nounset diff --git a/pages/managed-mongodb-databases/how-to/connect-database-instance.mdx b/pages/managed-mongodb-databases/how-to/connect-database-instance.mdx index 2d17bf5bc9..3f200c05c2 100644 --- a/pages/managed-mongodb-databases/how-to/connect-database-instance.mdx +++ b/pages/managed-mongodb-databases/how-to/connect-database-instance.mdx @@ -34,7 +34,7 @@ Find below a detailed description of each connection mode: To connect to a public endpoint using the MongoDB® shell: 1. Replace the following variables in the command as described: - ```sh + ```bash mongosh "mongodb+srv://{db-instance-id}.mgdb.{region}.scw.cloud" --tlsCAFile {your_certificate.pem} -u {username} ``` @@ -47,7 +47,7 @@ To connect to a public endpoint using the MongoDB® shell: 3. Enter your password when prompted. If the connection is successful, you should see the following message display on your console, and be able to write queries: - ```sh + ```bash Current Mongosh Log ID: 67ab0096d43bcc1d9ed4336d Connecting to: mongodb+srv://@{db-instance-id}.mgdb.{region}.scw.cloud/?appName=mongosh+2.3.8 Using MongoDB: 7.0.12 diff --git a/pages/managed-mongodb-databases/quickstart.mdx b/pages/managed-mongodb-databases/quickstart.mdx index 8bddd1ff84..fc9023f34a 100644 --- a/pages/managed-mongodb-databases/quickstart.mdx +++ b/pages/managed-mongodb-databases/quickstart.mdx @@ -64,7 +64,7 @@ Discover the Managed MongoDB® interface in the Scaleway console. ### Connect to a public endpoint 1. Replace the following variables in the command as described: - ```sh + ```bash mongosh "mongodb+srv://{instance_id}.mgdb.{region}.scw.cloud" --tlsCAFile {your_certificate.pem} -u {username} ``` @@ -78,7 +78,7 @@ Discover the Managed MongoDB® interface in the Scaleway console. 3. Enter your password when prompted. If the connection is successful, you should see the following message display on your console, and be able to write queries: - ```sh + ```bash The server generated these startup warnings when booting Powered by MongoDB® v0.9.0 and PostgreSQL 14.6. 
``` @@ -87,13 +87,13 @@ If the connection is successful, you should see the following message display on Follow the same procedure as above to connect to a private endpoint for one node, replacing `{privateNetworkName}` with the name of your Private Network: - ```sh + ```bash mongosh "mongodb://{instance_id}-0.{privateNetworkName}" -u {username} ``` ### Connect to a private endpoint with multiple nodes For multiple nodes, replace `{db-instance-id}` with the Database Instance UUID of each respective Instance, and `{privateNetworkName}` with the name of your Private Network: - ```sh + ```bash "mongodb://{instance_id}-0.{privateNetworkName},{instance_id}-1.{privateNetworkName},{instance_id}-2.{privateNetworkName}" -u {username} ``` \ No newline at end of file diff --git a/pages/object-storage/api-cli/create-bucket-policy.mdx b/pages/object-storage/api-cli/create-bucket-policy.mdx index ee012cb57a..0bc2f80fba 100644 --- a/pages/object-storage/api-cli/create-bucket-policy.mdx +++ b/pages/object-storage/api-cli/create-bucket-policy.mdx @@ -65,12 +65,12 @@ Make sure that you have [installed and configured the AWS CLI](/object-storage/a 1. Open a terminal and access the folder containing the previously created `bucket-policy.json` file. 2. Run the command below to apply the policy. Make sure to replace `` with the name of your bucket. - ```sh + ```bash aws s3api put-bucket-policy --bucket --policy file://bucket-policy.json ``` 3. Run the command below to display the bucket policy applied to your bucket. - ```sh + ```bash aws s3api get-bucket-policy --bucket --query Policy --output text | jq ``` An output similar to the following displays: @@ -106,7 +106,7 @@ To delete a bucket policy, you must have [Owner](/iam/concepts/#owner) status, o Run the command below to delete the policy of a specific bucket. Replace `` with the name of your bucket. - ```sh + ```bash aws s3api delete-bucket-policy --bucket ``` diff --git a/pages/object-storage/api-cli/enable-sse-c.mdx b/pages/object-storage/api-cli/enable-sse-c.mdx index c5f84e7935..633235bf4c 100644 --- a/pages/object-storage/api-cli/enable-sse-c.mdx +++ b/pages/object-storage/api-cli/enable-sse-c.mdx @@ -40,19 +40,19 @@ SSE-C requires a 256-bit (32-byte) base64-encoded key, and its MD5 digest. If yo 1. In a terminal, run the following command to generate a random 32-byte key, and store it in a file named `ssec.key`: - ```sh + ```bash openssl rand -out ssec.key 32 ``` 2. Run the following command to encode your key in base64, and export it as a variable: - ```sh + ```bash ENCRYPTION_KEY=$(cat ssec.key | base64) ``` 3. Run the following command to generate the base64-encoded 128-bit MD5 digest of your encryption key, and export it as an environment variable: - ```sh + ```bash KEY_DIGEST=$(openssl dgst -md5 -binary ssec.key | base64) ``` @@ -64,7 +64,7 @@ If you lose the encryption key, you also lose the data encrypted with it, as you 1. Run the command below to upload an object and encrypt it. Make sure to replace ``, ``, and `` with the correct values. - ```sh + ```bash aws s3api put-object \ --bucket \ --key \ @@ -80,7 +80,7 @@ If you lose the encryption key, you also lose the data encrypted with it, as you 2. (Optional) Run the command below to check that you **cannot** download your object without the encryption key and its digest. Make sure to replace ``, ``, and `` with the correct values. - ```sh + ```bash aws s3api head-object \ --bucket \ --key @@ -92,7 +92,7 @@ If you lose the encryption key, you also lose the data encrypted with it, as you 3.
Run the command below to download the previously uploaded object and decrypt it. Make sure to replace ``, ``, and `` with the correct values. - ```sh + ```bash aws s3api get-object \ --bucket \ --key \ @@ -106,7 +106,7 @@ If you lose the encryption key, you also lose the data encrypted with it, as you You can store your keys in files and pass them as arguments using the format below: - ```sh + ```bash --sse-customer-key file://path/to/file \ --sse-customer-key-md5 file://path/to/file ``` @@ -118,19 +118,19 @@ The [AWS S3 CLI](https://awscli.amazonaws.com/v2/documentation/api/latest/refere 1. In a terminal, run the following command to generate a random 32-byte key, and store it in a file named `sse.key`: - ```sh + ```bash openssl rand -out ssec.key 32 ``` 2. Run the command below to copy a local file to your Object Storage bucket. Make sure to replace the placeholders with the appropriate values. - ```sh + ```bash aws s3 cp s3:/// \ --sse-c AES256 \ --sse-c-key fileb://ssec.key ``` 3. Run the command below to download the file from your Object Storage bucket to your local file system. Make sure to replace the placeholders with the appropriate values. - ```sh + ```bash aws s3 cp s3:/// \ --sse-c AES256 \ --sse-c-key fileb://ssec.key diff --git a/pages/object-storage/api-cli/generate-aws4-auth-signature.mdx b/pages/object-storage/api-cli/generate-aws4-auth-signature.mdx index e8d867fbcd..90c62d4f0a 100644 --- a/pages/object-storage/api-cli/generate-aws4-auth-signature.mdx +++ b/pages/object-storage/api-cli/generate-aws4-auth-signature.mdx @@ -42,7 +42,7 @@ The canonical request included in the signature is made up of: This means that the following example: -```sh +```bash GET /?acl HTTP/1.1 Host: my-bucket.s3.ams-nl.scw.cloud @@ -52,7 +52,7 @@ x-amz-date: 20190411T101653Z Would be based on the following canonical code: -```sh +```bash GET / acl= @@ -66,7 +66,7 @@ e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 **Example authorization header** -```sh +```bash Authorization: AWS4-HMAC-SHA256 Credential=SCWN63TF9BMCPVNARV5A/20190411/nl-ams/s3/aws4_request, SignedHeaders=host;x-amz-acl;x-amz-content-sha256;x-amz-date, @@ -75,7 +75,7 @@ Signature=6cab03bef74a80a0441ab7fd33c829a2cdb46bba07e82da518cdb78ac238fda5 **Signing example (pseudo code)** -```sh +```bash canonicalRequest = ` ${HTTPMethod}\n ${canonicalURI}\n diff --git a/pages/object-storage/api-cli/object-lock.mdx b/pages/object-storage/api-cli/object-lock.mdx index 543df64f50..94f7d06ef9 100644 --- a/pages/object-storage/api-cli/object-lock.mdx +++ b/pages/object-storage/api-cli/object-lock.mdx @@ -186,7 +186,7 @@ aws s3api create-bucket --object-lock-enabled-for-bucket --bucket test-is-lock ``` By default, object lock is not activated on buckets. 
To activate it, you can run the following command: -```sh +```bash aws s3api put-object-lock-configuration \ --bucket my-bucket-with-object-lock \ --object-lock-configuration '{ "ObjectLockEnabled": "Enabled", "Rule": { "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 50 }}}' @@ -198,7 +198,7 @@ aws s3api put-object-lock-configuration \ To view the object lock configuration of a bucket, run the following command: - ```sh + ```bash aws s3api get-object-lock-configuration --bucket test-is-lock ``` @@ -425,7 +425,7 @@ GET /lockedbucket/myobject?legal-hold HTTP/1.1 Run the command below to apply a legal hold: -```sh +```bash aws s3api put-object-legal-hold --bucket test-is-lock --key go @@ -435,7 +435,7 @@ aws s3api put-object-legal-hold Run the command below to retrieve the legal hold status of an object: -```sh +```bash aws s3api get-object-legal-hold --bucket test-is-lock --key go diff --git a/pages/object-storage/how-to/restore-an-object-from-glacier.mdx b/pages/object-storage/how-to/restore-an-object-from-glacier.mdx index 5b451272a3..27ca97f17a 100644 --- a/pages/object-storage/how-to/restore-an-object-from-glacier.mdx +++ b/pages/object-storage/how-to/restore-an-object-from-glacier.mdx @@ -45,7 +45,7 @@ If you have numerous files in a bucket that you would like to restore, we recomm 1. Run the command below in a terminal to create a list of objects to restore, and store it as a text file. Make sure to replace `` with the name of your bucket. - ```sh + ```bash aws s3api list-objects-v2 --bucket --query "Contents[?StorageClass=='GLACIER']" --output text | awk '{print $2}' > glacier-restore.txt ``` @@ -54,7 +54,7 @@ If you have numerous files in a bucket that you would like to restore, we recomm 2. Run the following command to restore every object listed in the previous step. Make sure to replace `` with the name of your bucket, and `NUM` with the desired number of days. - ```sh + ```bash for x in `cat glacier-restore.txt` do aws s3api restore-object --restore-request Days=NUM --bucket --key "$x" diff --git a/pages/object-storage/how-to/use-obj-stor-with-private-networks.mdx b/pages/object-storage/how-to/use-obj-stor-with-private-networks.mdx index c0aea8672a..2e9052f70c 100644 --- a/pages/object-storage/how-to/use-obj-stor-with-private-networks.mdx +++ b/pages/object-storage/how-to/use-obj-stor-with-private-networks.mdx @@ -60,7 +60,7 @@ You must create an Instance without a flexible IP using the following specificat 2. 
Configure the following route to the Object Storage platform: - ```sh + ```bash # set this to keep the network on the instance ip route add 10.0.0.0/8 via `ip route | grep default | awk '{print $3} '` dev ens2 # dhcp on pn interface diff --git a/pages/object-storage/troubleshooting/cannot-restore-glacier.mdx b/pages/object-storage/troubleshooting/cannot-restore-glacier.mdx index 0ea770daff..7b2ec1cd4a 100644 --- a/pages/object-storage/troubleshooting/cannot-restore-glacier.mdx +++ b/pages/object-storage/troubleshooting/cannot-restore-glacier.mdx @@ -25,13 +25,13 @@ The time it takes to restore an object depends on the size of the object, and if Run the following command in a terminal to retrieve the metadata of the object you want to restore: -```sh +```bash aws s3api head-object --bucket --key ``` An output similar to the following displays: -```sh +```bash { "AcceptRanges": "bytes", "Restore": "ongoing-request=\"true\"", diff --git a/pages/object-storage/troubleshooting/deleted-objects-still-billed.mdx b/pages/object-storage/troubleshooting/deleted-objects-still-billed.mdx index 791252b9c0..3f8abf88d0 100644 --- a/pages/object-storage/troubleshooting/deleted-objects-still-billed.mdx +++ b/pages/object-storage/troubleshooting/deleted-objects-still-billed.mdx @@ -34,12 +34,12 @@ The amount billed does not correspond to the objects that are present in my Scal - If the versioning is `disabled`, the issue is not linked to versioning. - If the versioning is `enabled` or `suspended`, you may have multiple versions of your objects. 2. Use the [ListObjectVersions](/object-storage/api-cli/bucket-operations/#listobjectversions) command to list the versions of the objects in your bucket: - ```sh + ```bash aws s3api list-object-versions --bucket BucketName ``` A list of all the objects versions and delete markers present in the bucket appears. 3. Delete the unwanted versions and delete markers using the [DeleteObject] command with a versionId specified: - ```sh + ```bash aws s3api delete-object --bucket BucketName --key ObjectName --version-id ObjectVersion ``` Refer to the [official Amazon S3 documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/versioning-workflows.html) for more information on how versioning works. @@ -47,7 +47,7 @@ Refer to the [official Amazon S3 documentation](https://docs.aws.amazon.com/Amaz ### Multipart uploads 1. Check if some multipart uploads are ongoing using the [ListMultipartUpload](/object-storage/api-cli/multipart-uploads/#listing-multipart-uploads) command: - ```sh + ```bash list-multipart-uploads --bucket BucketName ``` A list of ongoing multipart uploads displays. diff --git a/pages/opensearch/how-to/connect-to-opensearch-deployment.mdx b/pages/opensearch/how-to/connect-to-opensearch-deployment.mdx index 59e8bfcce7..d64df84f87 100644 --- a/pages/opensearch/how-to/connect-to-opensearch-deployment.mdx +++ b/pages/opensearch/how-to/connect-to-opensearch-deployment.mdx @@ -41,7 +41,7 @@ You are now connected to your Cloud Essentials for OpenSearch deployment. Refer 4. In a terminal, run the following command to interact with your deployment. Remember to replace the placeholders with the appropriate values: - ```sh + ```bash curl -X GET "/_cluster/health" -ku : ``` diff --git a/pages/opensearch/quickstart.mdx b/pages/opensearch/quickstart.mdx index cb4123f5bc..e26b19baaf 100644 --- a/pages/opensearch/quickstart.mdx +++ b/pages/opensearch/quickstart.mdx @@ -62,7 +62,7 @@ You are now connected to your Cloud Essentials for OpenSearch deployment. 2. 
In a terminal, run the following command to interact with your deployment. Remember to replace the placeholders with the appropriate values: - ```sh + ```bash curl -X GET "/_cluster/health" -ku : ``` diff --git a/pages/queues/api-cli/queues-aws-cli.mdx b/pages/queues/api-cli/queues-aws-cli.mdx index 7329e862ce..4dafd3df8a 100644 --- a/pages/queues/api-cli/queues-aws-cli.mdx +++ b/pages/queues/api-cli/queues-aws-cli.mdx @@ -23,19 +23,19 @@ The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) 1. Use the following command to create a queue: - ```sh + ```bash aws sqs create-queue --queue-name MyQueue | tee my-queue.json ``` 2. Use the following command to list existing queues: - ```sh + ```bash aws sqs list-queues ``` 3. Use the following command to send messages to a queue: - ```sh + ```bash aws sqs send-message --queue-url $(jq -r .QueueUrl my-queue.json) --message-body "Hello world!" aws sqs send-message --queue-url $(jq -r .QueueUrl my-queue.json) --message-body "Second Message." @@ -43,7 +43,7 @@ The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) 4. Use the following command to receive messages: - ```sh + ```bash aws sqs receive-message --queue-url $(jq -r .QueueUrl my-queue.json) | tee message1.json aws sqs receive-message --queue-url $(jq -r .QueueUrl my-queue.json) | tee message2.json @@ -51,7 +51,7 @@ The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) 5. Use the following command to delete messages. This is necessary as once a message has been processed on your consumer side (typically by a worker), it will be re-queued unless it is explicitly deleted. - ```sh + ```bash aws sqs delete-message --queue-url $(jq -r .QueueUrl my-queue.json) --receipt-handle $(jq -r .Messages[0].ReceiptHandle message1.json) aws sqs delete-message --queue-url $(jq -r .QueueUrl my-queue.json) --receipt-handle $(jq -r .Messages[0].ReceiptHandle message2.json) @@ -59,6 +59,6 @@ The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) 6. Use the following command to delete the queue itself: - ```sh + ```bash aws sqs delete-queue --queue-url $(jq -r .QueueUrl my-queue.json) ``` \ No newline at end of file diff --git a/pages/scaleway-cli/quickstart.mdx b/pages/scaleway-cli/quickstart.mdx index a259fc7121..b0eb7b1013 100644 --- a/pages/scaleway-cli/quickstart.mdx +++ b/pages/scaleway-cli/quickstart.mdx @@ -33,7 +33,7 @@ Download the Scaleway CLI using a package manager according to your operating sy Install the latest stable release on macOS using [Homebrew](https://formulae.brew.sh/formula/scw): -```sh +```bash brew install scw ``` @@ -41,7 +41,7 @@ brew install scw Install the latest stable release on Arch Linux from [official repositories](https://archlinux.org/packages/extra/x86_64/scaleway-cli/) using `pacman`: -```sh +```bash pacman -S scaleway-cli ``` @@ -168,7 +168,7 @@ Refer to the [Scaleway CLI repository](https://github.com/scaleway/scaleway-cli) 11. 
Run `scw help` to display the available commands: - ```sh + ```bash $ scw help Get help about how the CLI works diff --git a/pages/serverless-containers/troubleshooting/tests-fail-on-container.mdx b/pages/serverless-containers/troubleshooting/tests-fail-on-container.mdx index 877857a868..83dd5b4017 100644 --- a/pages/serverless-containers/troubleshooting/tests-fail-on-container.mdx +++ b/pages/serverless-containers/troubleshooting/tests-fail-on-container.mdx @@ -20,6 +20,6 @@ Testing **Private** Serverless Containers is not possible using the Scaleway con - Change the visibility of your container to **public**. Public containers can be executed anonymously. - Make sure you have created an [authentication token](/serverless-containers/how-to/create-auth-token-from-console/) for your private container, then execute a `curl` request from a terminal, as shown below: - ```sh + ```bash curl -H "X-Auth-Token: " \ ``` \ No newline at end of file diff --git a/pages/serverless-functions/how-to/package-function-dependencies-in-zip.mdx b/pages/serverless-functions/how-to/package-function-dependencies-in-zip.mdx index 6a1dd398d6..1f94fc49e8 100644 --- a/pages/serverless-functions/how-to/package-function-dependencies-in-zip.mdx +++ b/pages/serverless-functions/how-to/package-function-dependencies-in-zip.mdx @@ -34,7 +34,7 @@ Avoid compressing your function using the file explorer or finder, as this metho Use the `zip` command to create an archive of your function and its dependencies: -```sh +```bash zip -r myFunction.zip myFunction/ ``` @@ -83,7 +83,7 @@ The example above will create a `.zip` archive that contains the myFunction fold 3. Run `ls -lh` to get the size of your archive in bytes: - ```sh + ```bash ls -lh -rw-r--r-- 1 user group 675 Apr 18 15:42 ``` @@ -101,7 +101,7 @@ The example above will create a `.zip` archive that contains the myFunction fold ``` 6.
Run the following command to check that the presigned URL has been properly exported: - ```sh + ```bash echo $PRESIGNED_URL ``` @@ -258,7 +258,7 @@ The example above will create a `.zip` archive that contains the myFunction fold To package your functions into an archive that can be uploaded to the console, you can use the `zip` utility: - ```sh + ```bash zip -r functions.zip handlers/ package/ ``` diff --git a/pages/serverless-functions/reference-content/code-examples.mdx b/pages/serverless-functions/reference-content/code-examples.mdx index be9b48a163..ebaf47655b 100644 --- a/pages/serverless-functions/reference-content/code-examples.mdx +++ b/pages/serverless-functions/reference-content/code-examples.mdx @@ -107,7 +107,7 @@ def handle(event, context): Example of reading URL parameters: -```sh +```bash curl https://myfunc/user/?id=1 ``` @@ -360,7 +360,7 @@ module.exports.myHandler = (event, context, callback) => { Example of reading URL parameters: -```sh +```bash curl https://myfunc/user/?id=1 ``` @@ -653,7 +653,7 @@ func Handle(w http.ResponseWriter, r *http.Request) { Example of reading URL parameters: -```sh +```bash curl https://myfunc/user/?id=1 ``` @@ -794,7 +794,7 @@ function handle($event, $context) { Example of reading URL parameters: -```sh +```bash curl https://myfunc/user/?id=1 ``` diff --git a/pages/serverless-functions/reference-content/deploy-function.mdx b/pages/serverless-functions/reference-content/deploy-function.mdx index 2eb6e8d93a..ca834bf88f 100644 --- a/pages/serverless-functions/reference-content/deploy-function.mdx +++ b/pages/serverless-functions/reference-content/deploy-function.mdx @@ -68,7 +68,7 @@ Installation instructions and documentation are available in the [Scaleway CLI r Below is an example of using the CLI to deploy a function: -```sh +```bash scw function namespace create name=hello # saving my namespace ID for later use scw function function create name=myfunc runtime=go120 namespace-id= diff --git a/pages/serverless-functions/reference-content/local-testing.mdx b/pages/serverless-functions/reference-content/local-testing.mdx index e442dd4fde..9b2cc28fb4 100644 --- a/pages/serverless-functions/reference-content/local-testing.mdx +++ b/pages/serverless-functions/reference-content/local-testing.mdx @@ -25,7 +25,7 @@ Refer to the [NodeJS local testing repository](https://github.com/scaleway/serve **Quickstart** 1. Install the Scaleway Serverless Functions package using `npm`: - ```sh + ```bash npm i @scaleway/serverless-functions ``` @@ -84,12 +84,12 @@ Refer to the [NodeJS local testing repository](https://github.com/scaleway/serve ``` 3. In a terminal, run the command below to execute your file and start the local webserver: - ```sh + ```bash node handler.js ``` 4. In another terminal session, run the command below: - ```sh + ```bash curl -X GET http://localhost:8080 ``` @@ -107,7 +107,7 @@ Refer to the [Python local testing repository](https://github.com/scaleway/serve **Quickstart** 1. Install the Scaleway Serverless Functions package using `pip`: - ```sh + ```bash pip install scaleway-functions-python ``` @@ -131,12 +131,12 @@ Refer to the [Python local testing repository](https://github.com/scaleway/serve 3. In a terminal, run the command below to execute your file and start the local webserver: - ```sh + ```bash python handler.py ``` 4. 
In another terminal session, run the command below: - ```sh + ```bash curl http://localhost:8080 ``` @@ -149,7 +149,7 @@ Refer to the [Python local testing repository](https://github.com/scaleway/serve The function above only processes GET requests, as declared in its code. Other requests will return the defined error message: - ```sh + ```bash curl -X POST http://localhost:8080 > Invalid method! ``` @@ -165,7 +165,7 @@ Refer to the [Go local testing repository](https://github.com/scaleway/serverles **Quickstart** 1. Install the Scaleway Serverless Functions package using `go get`: - ```sh + ```bash go get github.com/scaleway/serverless-functions-go ``` @@ -225,18 +225,18 @@ Refer to the [Go local testing repository](https://github.com/scaleway/serverles 4. Run the commands below to generate a `mod` file, then automatically add the modules to it: - ```sh + ```bash go mod init localfunc && go mod tidy ``` 5. Run the command below to create a new function for local testing: - ```sh + ```bash go run cmd/main.go ``` 6. In another terminal session, run the command below: - ```sh + ```bash curl http://localhost:8080 ``` diff --git a/pages/serverless-functions/troubleshooting/tests-fail-on-function.mdx b/pages/serverless-functions/troubleshooting/tests-fail-on-function.mdx index 482285213f..7b4f5c5382 100644 --- a/pages/serverless-functions/troubleshooting/tests-fail-on-function.mdx +++ b/pages/serverless-functions/troubleshooting/tests-fail-on-function.mdx @@ -20,6 +20,6 @@ Testing **Private** Serverless Functions is not possible using the Scaleway cons - Change the visibility of your function to **public**. Public functions can be executed anonymously. - Make sure you have created an [authentication token](/serverless-functions/how-to/create-auth-token-from-console/) for your private function, then execute a `curl` request from a terminal, as shown below: - ```sh + ```bash curl -H "X-Auth-Token: " \ ``` \ No newline at end of file diff --git a/pages/serverless-jobs/faq.mdx b/pages/serverless-jobs/faq.mdx index ebb1237ad1..3e3badc24d 100644 --- a/pages/serverless-jobs/faq.mdx +++ b/pages/serverless-jobs/faq.mdx @@ -134,7 +134,7 @@ When starting a job, you can use contextual options to define the number of jobs You can copy an image from an external registry by [logging in to the Scaleway Container Registry](/container-registry/how-to/connect-docker-cli/) using the Docker CLI, and by copying the image as shown below: -```sh +```bash docker pull alpine:latest docker tag alpine:latest rg.fr-par.scw.cloud/example/alpine:latest docker push rg.fr-par.scw.cloud/example/alpine:latest @@ -142,7 +142,7 @@ docker push rg.fr-par.scw.cloud/example/alpine:latest Alternatively, you can use tools such as [Skopeo](https://github.com/containers/skopeo) to copy the image: -```sh +```bash skopeo login rg.fr-par.scw.cloud -u nologin -p $SCW_SECRET_KEY skopeo copy --override-os linux docker://docker.io/alpine:latest docker://rg.fr-par.scw.cloud/example/alpine:latest ``` diff --git a/pages/serverless-jobs/how-to/build-push-container-image.mdx b/pages/serverless-jobs/how-to/build-push-container-image.mdx index e7757179b1..50941c656a 100644 --- a/pages/serverless-jobs/how-to/build-push-container-image.mdx +++ b/pages/serverless-jobs/how-to/build-push-container-image.mdx @@ -77,19 +77,19 @@ CMD ["./main"] 3. Run the command below to log in to your Scaleway account in the terminal. Make sure that you replace the placeholder values with your own. 
- ```sh + ```bash docker login rg.fr-par.scw.cloud/your-container-registry-namespace -u nologin --password-stdin <<< "$SCW_SECRET_KEY" ``` 4. Tag your Docker image so it matches your Scaleway registry's format: - ```sh + ```bash docker tag my-application:latest rg.fr-par.scw.cloud/your-container-registry-namespace/my-application:latest ``` 5. Push the Docker image to the Scaleway Container Registry: - ```sh + ```bash docker push rg.fr-par.scw.cloud/your-container-registry-namespace/my-application:latest ``` diff --git a/pages/serverless-jobs/how-to/execute-complex-commands.mdx b/pages/serverless-jobs/how-to/execute-complex-commands.mdx index 0825c04874..d6ab7681b7 100644 --- a/pages/serverless-jobs/how-to/execute-complex-commands.mdx +++ b/pages/serverless-jobs/how-to/execute-complex-commands.mdx @@ -50,7 +50,7 @@ Your file can now be passed to your Serverless Job as a secret reference. 3. Select the **external** container registry. 4. Enter the following image URL: - ```sh + ```bash scaleway/cli:latest ``` @@ -59,7 +59,7 @@ Your file can now be passed to your Serverless Job as a secret reference. 6. From the **Data** tab, add your command file as a [secrets reference](/serverless-functions/concepts/#secrets). Refer to the [dedicated documentation](/serverless-jobs/how-to/reference-secret-in-job/) on secrets for more information. 7. From the **Execution** tab, add the following startup command to call your file: - ```sh + ```bash bash /complex_command.sh start ``` diff --git a/pages/serverless-sql-databases/api-cli/import-data-to-serverless-sql-databases.mdx b/pages/serverless-sql-databases/api-cli/import-data-to-serverless-sql-databases.mdx index d12c69cf2c..602f39c35c 100644 --- a/pages/serverless-sql-databases/api-cli/import-data-to-serverless-sql-databases.mdx +++ b/pages/serverless-sql-databases/api-cli/import-data-to-serverless-sql-databases.mdx @@ -41,14 +41,14 @@ To complete this procedure, you must have installed PostgreSQL 16 (or newer) wit 1. Run the following command to download a local export of your database with `pg_dump`, then enter your password when prompted: - ```sh + ```bash pg_dump --no-privileges --no-owner -U {username} -h {host} --port {port} -Fc {databasename} > my-backup ``` You can download specific tables using the `-t` option: - ```sh + ```bash pg_dump -t table1name -t table2name --no-privileges --no-owner -U {username} -h {host} --port {port} -Fc {databasename} > my-backup ``` @@ -57,7 +57,7 @@ To complete this procedure, you must have installed PostgreSQL 16 (or newer) wit 3. Run the command below to import data into your Serverless SQL Database using `pg_restore`. Make sure to replace the placeholders with your Serverless SQL Database connection parameters: - ```sh + ```bash pg_restore --no-privileges --no-owner --clean --if-exists -U {username} -h {host} --port 5432 -d {databasename} my-backup ``` @@ -139,7 +139,7 @@ To complete this procedure, you must have: 1. In a terminal, access the directory containing the backup file, then run the command below to import data to your Serverless SQL Database using `pg_restore`. Make sure to replace the placeholders with your Serverless SQL Database connection parameters: - ```sh + ```bash pg_restore --no-privileges --no-owner --clean --if-exists -U {username} -h {host} --port 5432 -d {databasename} my-backup ``` @@ -171,13 +171,13 @@ You can create a `.csv` file from an existing PostgreSQL table with the [psql \c 1. 
In a terminal, access the folder containing your data file: - ```sh + ```bash cd path/to/my-table.csv ``` 2. Connect to your Serverless SQL Database using psql: - ```sh + ```bash psql "postgresql://{username}:{password}@{host}:5432/{databasename}?sslmode=require" ``` diff --git a/pages/serverless-sql-databases/api-cli/postgrest-row-level-security.mdx b/pages/serverless-sql-databases/api-cli/postgrest-row-level-security.mdx index f9d8397fd5..722b10744e 100644 --- a/pages/serverless-sql-databases/api-cli/postgrest-row-level-security.mdx +++ b/pages/serverless-sql-databases/api-cli/postgrest-row-level-security.mdx @@ -71,7 +71,7 @@ Due to connection pooling, Serverless SQL Databases currently only support trans - `db-uri` must use credentials with an [application](/iam/how-to/create-application/) having **ServerlessSQLDatabaseDataReadWrite** permissions (not **ServerlessSQLDatabaseReadWrite** or **ServerlessSQLDatabaseFullAccess**) - `db-schemas` is your database schema. Use `public` as a default value. - `jwt-secret` is a token generated using the following command: - ```sh + ```bash openssl rand -base64 32 ``` diff --git a/pages/serverless-sql-databases/api-cli/secure-connection-ssl-tls.mdx b/pages/serverless-sql-databases/api-cli/secure-connection-ssl-tls.mdx index 0043f7a07f..b102f5e09b 100644 --- a/pages/serverless-sql-databases/api-cli/secure-connection-ssl-tls.mdx +++ b/pages/serverless-sql-databases/api-cli/secure-connection-ssl-tls.mdx @@ -28,7 +28,7 @@ Configuration examples for languages, frameworks and tools: Starting from PostgreSQL 16, you can set up SSL/TLS to rely on the default certification authority certificates trusted by your operating system. To do so, use the additional configuration parameters `sslmode=verify-full` and `sslrootcert=system`. For instance, your full connection string should be: - ```sh + ```bash postgresql://{username}:{password}@{host}:{port}/{databasename}?sslmode=verify-full&sslrootcert=system ``` @@ -43,7 +43,7 @@ Alternatively, you can also download the trusted root Certificate used to sign o Your full connection string should be the output of this command: -```sh +```bash echo "postgresql://{username}:{password}@{host}:{port}/{databasename}?sslmode=verify-ca&sslrootcert=$(echo ~/.postgresql/isrgx1root.pem)" ``` @@ -144,7 +144,7 @@ By default, Prisma uses its built-in PostgreSQL driver which does not support `s To ensure SSL/TLS is enforced and the server certificate is valid, add these two parameters to your connection string in your `.env` file: -```sh +```bash DATABASE_URL=postgresql://{username}:{password}@{host}:{port}/{databasename}?sslmode=require&sslaccept=strict ``` @@ -259,7 +259,7 @@ As the official client bundled with PostgreSQL, [psql](https://www.postgresql.or Edit your connection parameters to add `sslmode=verify-full` and `sslrootcert=system` parameters: - ```sh + ```bash psql "postgresql://{username}:{password}@{host}:{port}/{databasename}?sslmode=verify-full&sslrootcert=system" ``` diff --git a/pages/serverless-sql-databases/how-to/connect-to-a-database.mdx b/pages/serverless-sql-databases/how-to/connect-to-a-database.mdx index f966e52aef..9a7b63e25b 100644 --- a/pages/serverless-sql-databases/how-to/connect-to-a-database.mdx +++ b/pages/serverless-sql-databases/how-to/connect-to-a-database.mdx @@ -15,7 +15,7 @@ This page shows you how to set up the connection to a Serverless SQL Database us To connect to a Serverless SQL Database, you can either use a **connection string**, or **connection parameters**. 
- A connection string provides the necessary information and parameters to establish a connection between an [IAM user](/iam/concepts/#user) or [application](/iam/concepts/#application), and the database. The string is written as follows: - ```sh + ```bash postgres://[user-or-application-id]:[api-secret-key]@[database-hostname]:5432/[database-name]?sslmode=require ``` @@ -24,7 +24,7 @@ To connect to a Serverless SQL Database, you can either use a **connection strin - Connection parameters provide the necessary information and parameters to connect an [IAM user](/iam/concepts/#user) or [application](/iam/concepts/#application) to a database. The parameters are expressed in the `KEY="value"` format, as follows: - ```sh + ```bash PGUSER="user-or-application-id" PGPASSWORD="api-secret-key" PGHOST="database-hostname" @@ -62,7 +62,7 @@ To connect to a Serverless SQL Database, you can either use a **connection strin postgresql://example-user-4052-8739-2017c3d9c0d9:example-secret-key-3fd8f53210ec@example-host-4d89-8e7d-g56vb754.pg.sdb.fr-par.scw.cloud:5432/serverless-sqldb-example-db?sslmode=require ``` - Connection parameters: - ```sh + ```bash PGUSER="example-user-4052-8739-2017c3d9c0d9" PGPASSWORD="example-secret-key-3fd8f53210ec" PGHOST="example-host-4d89-8e7d-g56vb754.pg.sdb.fr-par.scw.cloud" @@ -85,7 +85,7 @@ To connect to a Serverless SQL Database, you can either use a **connection strin 1. Run the following command in a terminal (including the `"` characters): - ```sh + ```bash psql "[YOUR_CONNECTION_STRING]" ``` diff --git a/pages/serverless-sql-databases/troubleshooting/maximum-prepared-statements-reached.mdx b/pages/serverless-sql-databases/troubleshooting/maximum-prepared-statements-reached.mdx index 7b043fbad4..236c9c7dd6 100644 --- a/pages/serverless-sql-databases/troubleshooting/maximum-prepared-statements-reached.mdx +++ b/pages/serverless-sql-databases/troubleshooting/maximum-prepared-statements-reached.mdx @@ -13,7 +13,7 @@ dates: The error message below appears when trying to create a new prepared statement: -```sh +```bash FATAL: failed to prepare statement: adding the prepared statement would exceed the limit of 1048576 bytes for client connection: maximum allowed size of prepared statements for connection reached (SQLSTATE 53400). ``` @@ -30,13 +30,13 @@ The total size of [prepared statements](https://www.postgresql.org/docs/current/ - If you (or the PostgreSQL client you are using) created too many prepared statements in a single PostgreSQL connection, reduce the number of prepared statements, or use the [deallocate](https://www.postgresql.org/docs/current/sql-deallocate.html) feature to remove prepared statements in an active session: 1. Execute the command below to list the prepared statements in your current session: - ```sh + ```bash SELECT * FROM pg_prepared_statements; ``` 2. Run the command below to remove the desired prepared statement: - ```sh + ```bash DEALLOCATE prepared_statement_name; ``` diff --git a/pages/topics-and-events/api-cli/topics-events-aws-cli.mdx b/pages/topics-and-events/api-cli/topics-events-aws-cli.mdx index 9f54283171..272f5a8388 100644 --- a/pages/topics-and-events/api-cli/topics-events-aws-cli.mdx +++ b/pages/topics-and-events/api-cli/topics-events-aws-cli.mdx @@ -24,13 +24,13 @@ The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) 1. Use the following command to create a topic: - ```sh + ```bash aws sns create-topic --name MyTopic | tee my-topic.json ``` 2. 
Use the following command to list existing topics: - ```sh + ```bash aws sns list-topics ``` @@ -40,7 +40,7 @@ The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) 2. Use the following command to configure a subscription to push each new message sent on the topic to the HTTP server: - ```sh + ```bash aws sns subscribe --topic-arn $(jq -r .TopicArn my-topic.json) --protocol http --notification-endpoint | tee my-subscription.json ``` @@ -63,7 +63,7 @@ The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) 4. Use the following command to confirm the subscription: - ```sh + ```bash curl "" ``` @@ -76,7 +76,7 @@ The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) Only the main generated endpoint of the function will work, not the aliases. The endpoint should match the following format: - ```sh + ```bash https://-.functions.fnc.fr-par.scw.cloud example: "https://mynamespacexxxxxxxx-myfunction.functions.fnc.fr-par.scw.cloud)" ``` @@ -84,7 +84,7 @@ The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) 3. Use the following command to configure a subscription to push each new message sent on this topic to the function: - ```sh + ```bash aws sns subscribe --topic-arn $(jq -r .TopicArn my-topic.json) --protocol lambda --notification-endpoint | tee my-subscription.json ``` @@ -93,13 +93,13 @@ The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) 1. Use the following command to list subscriptions: - ```sh + ```bash aws sns list-subscriptions ``` 2. Use the following command to publish a message on the topic: - ```sh + ```bash aws sns publish --topic-arn $(jq -r .TopicArn my-topic.json) --message "Hello world!" --message-deduplication-id $(date +%s) ``` @@ -110,25 +110,25 @@ The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) - For **lambda** targets, your function should have been called with the message as argument - ```sh + ```bash aws sqs receive-message --queue-url $(jq -r .QueueUrl my-queue.json) | tee message1.json ``` 4. Use the following command to delete the message received on a **Scaleway Queues** target. This is necessary to prevent it from being re-queued: - ```sh + ```bash aws sqs delete-message --queue-url $(jq -r .QueueUrl my-queue.json) --receipt-handle $(jq -r .Messages[0].ReceiptHandle message1.json) ``` 5. Use the following command to delete the subscription: - ```sh + ```bash aws sns unsubscribe --subscription-arn $(jq -r .SubscriptionArn my-subscription.json) ``` 6. Use the following command to delete the Scaleway queue (if you had a Scaleway Queues target): - ```sh + ```bash aws sqs delete-queue --queue-url $(jq -r .QueueUrl my-queue.json) ``` @@ -138,6 +138,6 @@ The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) 7. Use the following command to delete the topic: - ```sh + ```bash aws sns delete-topic --topic-arn $(jq -r .TopicArn my-topic.json) ``` diff --git a/pages/webhosting/troubleshooting/troubleshooting-dns-issues.mdx b/pages/webhosting/troubleshooting/troubleshooting-dns-issues.mdx index 55baf69ac3..ea37240710 100644 --- a/pages/webhosting/troubleshooting/troubleshooting-dns-issues.mdx +++ b/pages/webhosting/troubleshooting/troubleshooting-dns-issues.mdx @@ -48,7 +48,7 @@ You are experiencing DNS-related errors with your Scaleway Web Hosting service. #### Test local DNS resolution Use the `nslookup` or `dig` command to check domain resolution. 
- Example: - ```sh + ```bash nslookup yourdomain.com dig yourdomain.com +short ``` diff --git a/tutorials/access-mac-mini-with-reemo/index.mdx b/tutorials/access-mac-mini-with-reemo/index.mdx index c4aa51f0c4..23fc347629 100644 --- a/tutorials/access-mac-mini-with-reemo/index.mdx +++ b/tutorials/access-mac-mini-with-reemo/index.mdx @@ -65,7 +65,7 @@ In this tutorial, you will learn how to launch your Mac mini development environ 1. Log into your Mac mini using an existing [remote desktop connection](/apple-silicon/how-to/access-remote-desktop-mac-mini/). 2. Once logged into your Mac mini, open a terminal. 3. Run the following command to install the Reemo client on your machine: - ```sh + ```bash # For older macOS versions (pre-Ventura), you can install this audio driver for audio capture support brew install blackhole-2ch # You need administrator privileges diff --git a/tutorials/backup-mongodb-jobs/index.mdx b/tutorials/backup-mongodb-jobs/index.mdx index 4e5502a525..6756a416fb 100644 --- a/tutorials/backup-mongodb-jobs/index.mdx +++ b/tutorials/backup-mongodb-jobs/index.mdx @@ -52,7 +52,7 @@ Serverless Jobs are perfectly adapted for these autonomous tasks, as we do not n For more details about variables used by `cli`, refer to the [CLI config documentation](https://github.com/scaleway/scaleway-cli/blob/master/docs/commands/config.md). 9. In the **Execution** tab, define the command below, and replace the placeholders with the ID of your Managed MongoDB® Database Instance and the name of your snapshot: - ```sh + ```bash /scw mongodb snapshot create name="snapshot_$(date +%Y%m%d_%H%M%S)" expires-at=30d ``` diff --git a/tutorials/ceph-cluster/index.mdx b/tutorials/ceph-cluster/index.mdx index 2a4c20b7d7..9874adbc69 100644 --- a/tutorials/ceph-cluster/index.mdx +++ b/tutorials/ceph-cluster/index.mdx @@ -206,7 +206,7 @@ This tutorial guides you through deploying a three-node Ceph cluster with a RADO ``` 3. Test the setup: - ```sh + ```bash aws s3 mb s3://mybucket --endpoint-url http://ceph-node-a:80 echo "Hello Ceph!" > testfile.txt aws s3 cp testfile.txt s3://mybucket --endpoint-url http://ceph-node-a:80 diff --git a/tutorials/configure-chatboxai-with-generative-apis/index.mdx b/tutorials/configure-chatboxai-with-generative-apis/index.mdx index e23a6b02f7..5f28cb6b2c 100644 --- a/tutorials/configure-chatboxai-with-generative-apis/index.mdx +++ b/tutorials/configure-chatboxai-with-generative-apis/index.mdx @@ -27,7 +27,7 @@ For most users, the easiest way is to use the pre-built packages available on th - **Linux:** 1. Download the AppImage from the [Chatbox AI website](https://chatboxai.app/en). 2. Make it executable: - ```sh + ```bash chmod +x ChatboxAI.AppImage ``` 3. Run the application. @@ -40,27 +40,27 @@ For most users, the easiest way is to use the pre-built packages available on th For advanced users, Chatbox AI can be built from source: 1. Clone the Chatbox AI repository: - ```sh + ```bash git clone https://github.com/Bin-Huang/chatbox.git ``` 2. Navigate to the project directory: - ```sh + ```bash cd chatbox ``` 3. Install dependencies using `npm`: - ```sh + ```bash npm install ``` 4. Start the application in development mode to verify it is working: - ```sh + ```bash npm run dev ``` 5. Build the application and package the installer for your current platform: - ```sh + ```bash npm run package ``` 6.
(Optional) Build installers for all platforms: - ```sh + ```bash npm run package:all ``` diff --git a/tutorials/create-openwrt-image-for-scaleway/index.mdx b/tutorials/create-openwrt-image-for-scaleway/index.mdx index 4b115cc02e..5616185423 100644 --- a/tutorials/create-openwrt-image-for-scaleway/index.mdx +++ b/tutorials/create-openwrt-image-for-scaleway/index.mdx @@ -193,7 +193,7 @@ In this tutorial, we do not set up cloud-init, but use the same magic IP mechani 2. Create the fetch script: - ```sh + ```bash cat </etc/init.d/fetch_ssh_keys #!/bin/sh /etc/rc.common @@ -213,7 +213,7 @@ In this tutorial, we do not set up cloud-init, but use the same magic IP mechani 3. Add the script to `rc.d`. - ```sh + ```bash ln -s /etc/init.d/fetch_ssh_keys /etc/rc.d/S97fetch_ssh_keys ``` diff --git a/tutorials/create-valheim-server/index.mdx b/tutorials/create-valheim-server/index.mdx index 36b0eb6dc6..725391af56 100644 --- a/tutorials/create-valheim-server/index.mdx +++ b/tutorials/create-valheim-server/index.mdx @@ -110,7 +110,7 @@ scw instance server create type=DEV1-L zone=fr-par-2 image=ubuntu_focal root-vol nano start_valheim_server.sh ``` 9. Modify the script as follows to automatically download the latest Valheim updates from Steam when the application starts. Replace the values `My Server` with the name of your server, `Dedicated` with the world you want to use on the server, and `secret_password` with your server's password. Make sure to set the path of the `valheim_server` executable to the complete path of the binary file. Once edited, save the file by pressing **CTRL+O**, then leave nano by pressing **CTRL+X**. - ```sh + ```bash export templdpath=$LD_LIBRARY_PATH export LD_LIBRARY_PATH=./linux64:$LD_LIBRARY_PATH export SteamAppId=892970 diff --git a/tutorials/deploy-laravel-on-serverless-containers/index.mdx b/tutorials/deploy-laravel-on-serverless-containers/index.mdx index 26115a3afd..a38d104f2f 100644 --- a/tutorials/deploy-laravel-on-serverless-containers/index.mdx +++ b/tutorials/deploy-laravel-on-serverless-containers/index.mdx @@ -215,7 +215,7 @@ In this section, we will focus on building the containerized image. With Docker, ``` 5. Build the Docker image. - ```sh + ```bash docker build -t my-image . ``` @@ -226,7 +226,7 @@ In this section, we will focus on building the containerized image. With Docker, 2. Run the following command in your local terminal to log in to the newly created Container Registry. - ```sh + ```bash docker login rg.fr-par.scw.cloud/namespace-zen-feistel -u nologin --password-stdin <<< "$SCW_SECRET_KEY" ``` @@ -236,7 +236,7 @@ In this section, we will focus on building the containerized image. With Docker, 3. Tag the image and push it to the Container Registry namespace. - ```sh + ```bash docker tag my-image rg.fr-par.scw.cloud/namespace-zen-feistel/my-image:v1 docker push rg.fr-par.scw.cloud/namespace-zen-feistel/my-image:v1 ``` @@ -309,7 +309,7 @@ class ProcessPodcast implements ShouldQueue ``` Then, use `hey` to send 400 requests (20 concurrent requests) to this route. -```sh +```bash hey -n 400 -q 20 https://example.com/test ``` diff --git a/tutorials/deploy-meilisearch-instance/index.mdx b/tutorials/deploy-meilisearch-instance/index.mdx index 96b77708de..0b0a84ca52 100644 --- a/tutorials/deploy-meilisearch-instance/index.mdx +++ b/tutorials/deploy-meilisearch-instance/index.mdx @@ -88,7 +88,7 @@ This tutorial shows you how to deploy a Meilisearch search engine on a [Scaleway ## Accessing Meilisearch remotely 1.
Open a new terminal and export your environment variables. Make sure that you replace `{INSTANCE_PUBLIC_DNS}` and `{MEILI_MASTER_KEY}` with your own variables. - ```sh + ```bash export INSTANCE_PUBLIC_DNS={INSTANCE_PUBLIC_DNS} export MEILI_MASTER_KEY={MEILI_MASTER_KEY} ``` @@ -98,7 +98,7 @@ This tutorial shows you how to deploy a Meilisearch search engine on a [Scaleway export MEILI_MASTER_KEY=LtRrsh68IdT2jKDH5DdXhA== ``` 2. Run the following command to access Meilisearch remotely: - ```sh + ```bash curl -X GET 'http://'$INSTANCE_PUBLIC_DNS':7700/version' -H 'Authorization: Bearer '$MEILI_MASTER_KEY ``` @@ -111,7 +111,7 @@ If you have created your Instance within a **Private Network** or if you are usi ## Creating an index and adding data to it 1. In the same terminal as the previous steps, paste the following command to create a new index named `movies`: - ```sh + ```bash curl -X POST 'http://'$INSTANCE_PUBLIC_DNS':7700/indexes' \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer '$MEILI_MASTER_KEY \ @@ -122,7 +122,7 @@ If you have created your Instance within a **Private Network** or if you are usi ``` 2. Add sample data to the `movies` index. In this case, we are adding 2 movies. - ```sh + ```bash curl -X POST 'http://'$INSTANCE_PUBLIC_DNS':7700/indexes/movies/documents' \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer '$MEILI_MASTER_KEY \ @@ -145,7 +145,7 @@ If you have created your Instance within a **Private Network** or if you are usi ``` 3. Search for the term `mystery` in the `movies` index: - ```sh + ```bash curl -X POST 'http://'$INSTANCE_PUBLIC_DNS':7700/indexes/movies/search' \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer '$MEILI_MASTER_KEY \ diff --git a/tutorials/easydeploy-vault/index.mdx b/tutorials/easydeploy-vault/index.mdx index ad614eb5d7..027f20a70d 100644 --- a/tutorials/easydeploy-vault/index.mdx +++ b/tutorials/easydeploy-vault/index.mdx @@ -45,7 +45,7 @@ Vault is used to secure, store and protect secrets and other sensitive data usin ## Initializing and unsealing Vault 1. Check the status of your Vault using the `kubectl` command. - ```sh + ```bash kubectl get pods -l app.kubernetes.io/name=vault ``` @@ -53,7 +53,7 @@ Vault is used to secure, store and protect secrets and other sensitive data usin 2. Initialize Vault. Replace `vault-0` with the name of your application. If your application is called `vault-application` the value will be `vault-application-0`. - ```sh + ```bash kubectl exec -it vault-0 -- vault operator init ``` @@ -61,31 +61,31 @@ Vault is used to secure, store and protect secrets and other sensitive data usin 3. Unseal Vault using three unseal keys retrieved in the previous step: - ```sh + ```bash kubectl exec -it vault-0 -- vault operator unseal kubectl exec -it vault-0 -- vault operator unseal kubectl exec -it vault-0 -- vault operator unseal ``` 4. Login to Vault using the initial root token generated in step two: - ```sh + ```bash kubectl exec -it vault-0 -- vault login ``` 5. Enable the KV secrets engine at `secret/`: - ```sh + ```bash kubectl exec -it vault-0 -- vault secrets enable -path=secret kv-v2 ``` ## Configure Vault for Kubernetes authentication 1. Enable Kubernetes authentication: - ```sh + ```bash kubectl exec -it vault-0 -- vault auth enable kubernetes ``` 2. Enter the Vault shell: - ```sh + ```bash kubectl exec -it vault-0 -- sh ``` 3. 
Paste the following configuration to configure Vault with the Kubernetes API: @@ -99,11 +99,11 @@ Vault is used to secure, store and protect secrets and other sensitive data usin Replace `` with the IP address of your Vault Pod. You can retrieve it using the `kubectl get svc` command. The Pod name corresponds to your application name (e.g. if your application is called vault-application, the Pod name will be `application-vault`). 4. Enter the Vault shell: - ```sh + ```bash kubectl exec -it vault-0 -- sh ``` 5. Paste the following configuration to create a policy: - ```sh + ```bash vault policy write myapp-kv-ro -< another-text-file.txt ``` 3. Display the contents of the new text file: - ```sh + ```bash cat another-text-file.txt ``` The contents of the new text file display on the command line. Of course, the contents are identical, as it is simply a copy of the first file: - ```sh + ```bash Hello world! This is my first text file in Linux ``` @@ -265,19 +265,19 @@ You can move files and directories with the `mv` (**m**o**v**e) and `cp` (**c**o 1. Create a new directory called **dir-2** with the following command, as we saw earlier: - ```sh + ```bash mkdir dir-2 ``` 2. Make sure you are in the same working directory as the file `my-text-file.txt` that you previously created, and then copy into the directory created in step 1 with the following command: - ```sh + ```bash cp my-text-file.txt dir-2 ``` 3. Use `cd` and `list` to move into the new directory and check that the text file has been copied: - ```sh + ```bash cd dir-2 ls ``` @@ -286,7 +286,7 @@ You can move files and directories with the `mv` (**m**o**v**e) and `cp` (**c**o 4. Use `cd` to move back to the previous directory, and check that the original text file still exists there too: - ```sh + ```bash cd .. ls ``` @@ -295,7 +295,7 @@ You can move files and directories with the `mv` (**m**o**v**e) and `cp` (**c**o 5. Make a new text file with the following command. Notice that here instead of using `touch` to create an empty file, we use `nano` to directly create and edit the new file: - ```sh + ```bash nano second-text-file.txt ``` @@ -310,19 +310,19 @@ You can move files and directories with the `mv` (**m**o**v**e) and `cp` (**c**o 7. Use `mv` to move `second-text-file.txt` into the `dir-2` directory you previously created: - ```sh + ```bash mv second-text-file.txt dir-2 ``` 8. List contents of the current directory, to see that `second-text-file.txt` is no longer there: - ```sh + ```bash ls ``` 9. Use the following commands to change your working directory to `dir-2` and see that `second-text-file.txt` has been moved there: - ```sh + ```bash cd dir-2 ls ``` @@ -351,7 +351,7 @@ You can remove files and directories with the `rm` (**r**e**m**ove) command. 1. Navigate to the directory where `my-text-file.txt` is located, and delete the file with the following command: - ```sh + ```bash rm my-text-file.txt ``` @@ -364,7 +364,7 @@ You can remove files and directories with the `rm` (**r**e**m**ove) command. 3. Delete the `dir-2 directory` (which we presume still has other files inside): - ```sh + ```bash rm -r dir-2 ``` @@ -396,7 +396,7 @@ You need to use [sudo](#running-commands-as-the-superuser-sudo) for these comman 1. Update the software packages on your system with the following command: - ```sh + ```bash sudo apt update ``` @@ -404,7 +404,7 @@ You need to use [sudo](#running-commands-as-the-superuser-sudo) for these comman 2. 
Upgrade the software packages on your system with the following command: - ```sh + ```bash sudo apt upgrade ``` @@ -412,7 +412,7 @@ You need to use [sudo](#running-commands-as-the-superuser-sudo) for these comman You will probably see a message like this: - ```sh + ```bash After this operation, 19,5 kB of additional disk space will be used. Do you want to continue? [Y/n] ``` @@ -445,13 +445,13 @@ When creating user accounts, you need to either be logged in as `root`, or else 1. Create a new user with the `adduser` command. Replace **sarah** with the username of your choice. - ```sh + ```bash sudo adduser sarah ``` You will be prompted to add a **password** for this user. Then you will be prompted to add optional information such as first name, surname, and telephone number. You can hit enter to skip adding each optional piece of information. Hit `y` when prompted to confirm and create the user. - ```sh + ```bash Adding user `sarah' ... Adding new group `sarah' (1003) ... Adding new user `sarah' (1003) with group `sarah' ... diff --git a/tutorials/getting-started-with-kops-on-scaleway/index.mdx b/tutorials/getting-started-with-kops-on-scaleway/index.mdx index db2f092417..df7770a29a 100644 --- a/tutorials/getting-started-with-kops-on-scaleway/index.mdx +++ b/tutorials/getting-started-with-kops-on-scaleway/index.mdx @@ -38,7 +38,7 @@ With kOps, you can easily create, upgrade, and maintain highly available cluster Before working on the tutorial, it is important to set the following [environment variables](https://github.com/scaleway/scaleway-sdk-go/blob/master/scw/README.md) on your local computer. -```sh +```bash export SCW_ACCESS_KEY="my-access-key" export SCW_SECRET_KEY="my-secret-key" export SCW_DEFAULT_PROJECT_ID="my-project-id" diff --git a/tutorials/hestiacp/index.mdx b/tutorials/hestiacp/index.mdx index 61d6df5834..dd75ffdcdb 100644 --- a/tutorials/hestiacp/index.mdx +++ b/tutorials/hestiacp/index.mdx @@ -59,7 +59,7 @@ In this tutorial, you will learn how to install and configure HestiaCP on Ubuntu ``` bash hst-install.sh ``` - ```sh + ```bash _ _ _ _ ____ ____ | | | | ___ ___| |_(_) __ _ / ___| _ \ | |_| |/ _ \/ __| __| |/ _` | | | |_) | diff --git a/tutorials/how-to-implement-rag-generativeapis/index.mdx b/tutorials/how-to-implement-rag-generativeapis/index.mdx index ff71220a57..f4bc69f78f 100644 --- a/tutorials/how-to-implement-rag-generativeapis/index.mdx +++ b/tutorials/how-to-implement-rag-generativeapis/index.mdx @@ -43,7 +43,7 @@ In this tutorial, you will learn how to implement RAG using LangChain, a leading Run the following command to install the required MacOS packages to analyze PDF files and connect to PostgreSQL using Python: - ```sh + ```bash brew install libmagic poppler tesseract qpdf libpq python3-dev ``` @@ -51,7 +51,7 @@ Run the following command to install the required MacOS packages to analyze PDF Run the following command to install the required Debian/Ubuntu packages to analyze PDF files and connect to PostgreSQL using Python: - ```sh + ```bash sudo apt-get install libmagic-dev tesseract-ocr poppler-utils qpdf libpq-dev python3-dev build-essential python3-opencv ``` @@ -59,13 +59,13 @@ Run the following command to install the required Debian/Ubuntu packages to anal Once you have installed prerequisites for your OS, run the following command to install the required Python packages: - ```sh + ```bash pip install langchain langchainhub langchain_openai langchain_community langchain_postgres unstructured "unstructured[pdf]" libmagic python-dotenv psycopg2 boto3 ``` 
This command will install the latest version of all packages. If you want to limit the risk of dependency conflicts, you can install the following specific versions instead: - ```sh + ```bash pip install langchain==0.3.9 langchainhub==0.1.21 langchain-openai==0.2.10 langchain-community==0.3.8 langchain-postgres==0.0.12 unstructured==0.16.8 "unstructured[pdf]" libmagic==1.0 python-dotenv==1.0.1 psycopg2==2.9.10 boto3==1.35.71 ``` @@ -73,7 +73,7 @@ Once you have installed prerequisites for your OS, run the following command to Create a `.env` file and add the following variables. These will store your API keys, database connection details, and other configuration values. - ```sh + ```bash # .env file # Scaleway API credentials https://console.scaleway.com/iam/api-keys @@ -183,13 +183,13 @@ Edit `embed.py` to load all files in your bucket using `S3DirectoryLoader`, spli You can now run your vector embedding script with: - ```sh + ```bash python embed.py ``` You should see the following output for each file whose embeddings were loaded successfully into your Managed Database Instance: - ```sh + ```bash Vectors successfully added for document s3://{bucket_name}/{file_name} ``` @@ -269,12 +269,12 @@ Edit `rag.py` to configure the LLM client using `ChatOpenAI` and create a simple You can now execute your RAG pipeline with the following command: - ```sh + ```bash python rag.py ``` If you used the Scaleway cheatsheet provided as examples and asked for a CLI command to power off an Instance, you should see the following answer: - ```sh + ```bash scw instance server stop example-28f3-4e91-b2af-4c3502562d72 ``` @@ -287,7 +287,7 @@ You can now execute your RAG pipeline with the following command: Note that vector embedding enabled the system to retrieve proper document chunks even if the Scaleway cheatsheet never mentions `shut down` but only `power off`. You can compare this result without RAG (for instance, by using the same prompt in [Generative APIs Playground](https://console.scaleway.com/generative-api/models/fr-par/playground?modelName=llama-3.1-8b-instruct)): - ```sh + ```bash scaleway instance shutdown --instance-uuid example-28f3-4e91-b2af-4c3502562d72 ``` @@ -350,7 +350,7 @@ Replace the `rag.py` content with the following: You can now execute your custom RAG pipeline with the following command: - ```sh + ```bash python rag.py ``` diff --git a/tutorials/how-to-implement-rag/index.mdx b/tutorials/how-to-implement-rag/index.mdx index b57367f0db..929bc5836f 100644 --- a/tutorials/how-to-implement-rag/index.mdx +++ b/tutorials/how-to-implement-rag/index.mdx @@ -43,14 +43,14 @@ LangChain simplifies the process of enhancing language models with retrieval cap Run the following command to install the required packages: - ```sh + ```bash pip install langchain psycopg2 python-dotenv ``` ### Create a .env file Create a .env file and add the following variables. These will store your API keys, database connection details, and other configuration values. - ```sh + ```bash # .env file # Scaleway API credentials diff --git a/tutorials/install-openvpn/index.mdx b/tutorials/install-openvpn/index.mdx index 75d38a8439..016e6123df 100644 --- a/tutorials/install-openvpn/index.mdx +++ b/tutorials/install-openvpn/index.mdx @@ -29,35 +29,35 @@ Learn how to install and configure OpenVPN on Ubuntu 24.04 LTS with this compreh ## Installing OpenVPN and Easy-RSA 1. Connect to your Instance via SSH. - ```sh + ```bash root@ ``` 2. Update the package list and upgrade already installed packages: - ```sh + ```bash apt update apt upgrade -y ``` 3. 
Install OpenVPN and Easy-RSA using `apt`: - ```sh + ```bash apt install -y openvpn easy-rsa ``` ## Setting up the Certificate Authority (CA) 1. Create a directory for Easy-RSA and navigate to it: - ```sh + ```bash mkdir -p ~/openvpn-ca cd ~/openvpn-ca ``` 2. Initialize the Public Key Infrastructure (PKI): - ```sh + ```bash cp -r /usr/share/easy-rsa/* /etc/openvpn/easy-rsa/ cd /etc/openvpn/easy-rsa/ ./easyrsa init-pki ``` 3. Build the Certificate Authority (CA): - ```sh + ```bash ./easyrsa build-ca ``` You will be prompted to set a passphrase and provide a Common Name (e.g., "OpenVPN-CA"). @@ -65,30 +65,30 @@ Learn how to install and configure OpenVPN on Ubuntu 24.04 LTS with this compreh ## Generating server and client certificates 1. Generate the server certificate and key: - ```sh + ```bash ./easyrsa gen-req server nopass ./easyrsa sign-req server server ``` Approve the signing request when prompted. 2. Generate Diffie-Hellman parameters: - ```sh + ```bash ./easyrsa gen-dh ``` 3. Generate a shared secret for additional security: - ```sh + ```bash openvpn --genkey secret /etc/openvpn/ta.key ``` ## Configuring the OpenVPN Server 1. Copy the necessary files to the OpenVPN directory: - ```sh + ```bash cp pki/ca.crt pki/private/server.key pki/issued/server.crt /etc/openvpn/ cp /etc/openvpn/easy-rsa/pki/dh.pem /etc/openvpn/ cp /etc/openvpn/ta.key /etc/openvpn/ ``` 2. Create the OpenVPN server configuration file: - ```sh + ```bash nano /etc/openvpn/server.conf ``` 3. Add the following configuration: @@ -122,17 +122,17 @@ Learn how to install and configure OpenVPN on Ubuntu 24.04 LTS with this compreh ## Enabling IP forwarding and configuring the firewall 1. Enable IP forwarding: - ```sh + ```bash echo 'net.ipv4.ip_forward=1' | tee -a /etc/sysctl.conf sysctl -p ``` 2. Configure the firewall ([UFW](/tutorials/installation-uncomplicated-firewall/)): - ```sh + ```bash ufw allow 1194/udp ufw allow OpenSSH ``` 3. Edit the UFW configuration to allow forwarding: - ```sh + ```bash nano /etc/ufw/before.rules ``` 4. Add the following lines before the `*filter` line: @@ -143,7 +143,7 @@ Learn how to install and configure OpenVPN on Ubuntu 24.04 LTS with this compreh COMMIT ``` 5. Save and exit, then reload UFW: - ```sh + ```bash ufw disable ufw enable ``` @@ -151,13 +151,13 @@ Learn how to install and configure OpenVPN on Ubuntu 24.04 LTS with this compreh ## Starting the OpenVPN server 1. Start and enable the OpenVPN service: - ```sh + ```bash systemctl start openvpn@server systemctl enable openvpn@server ``` 2. Check the status of the OpenVPN service: - ```sh + ```bash systemctl status openvpn@server ``` Ensure it is active and running. @@ -166,7 +166,7 @@ Learn how to install and configure OpenVPN on Ubuntu 24.04 LTS with this compreh 1. Generate client certificates: - ```sh + ```bash cd /etc/openvpn/easy-rsa/ ./easyrsa gen-req client1 nopass ./easyrsa sign-req client client1 @@ -175,7 +175,7 @@ Learn how to install and configure OpenVPN on Ubuntu 24.04 LTS with this compreh 2. Create the client configuration file: On your server, create a new client configuration file named `client1.ovpn`: - ```sh + ```bash nano ~/client1.ovpn ``` 3. Add the following configuration in the file, replacing `your_server_ip_or_domain` with your server's IP address or domain name: @@ -217,7 +217,7 @@ Learn how to install and configure OpenVPN on Ubuntu 24.04 LTS with this compreh Replace the placeholder text (e.g., `# Insert the content of /etc/openvpn/ca.crt here`) with the actual contents of the respective files. 
You can use the `cat` command to display the contents of each file and then copy and paste them into the appropriate sections of the `client1.ovpn` file. - For example: - ```sh + ```bash cat /etc/openvpn/ca.crt ``` Copy the output and paste it between the `` and `` tags in the `client1.ovpn` file. @@ -225,7 +225,7 @@ Learn how to install and configure OpenVPN on Ubuntu 24.04 LTS with this compreh 4. Transfer the client configuration file to the client device: Use a secure method to transfer the `client1.ovpn` file to the device you intend to use as a client. You can use `scp` (secure copy) for this purpose: - ```sh + ```bash scp ~/client1.ovpn user@client_device_ip:/path/to/destination/ ``` Replace `user` with your username on the client device, `client_device_ip` with the client's IP address, and `/path/to/destination/` with the desired directory on the client device. @@ -233,7 +233,7 @@ Learn how to install and configure OpenVPN on Ubuntu 24.04 LTS with this compreh Ensure that the OpenVPN client is installed on your client device. Installation methods vary depending on the operating system: - **Linux:** - ```sh + ```bash apt update apt install -y openvpn ``` @@ -251,7 +251,7 @@ Learn how to install and configure OpenVPN on Ubuntu 24.04 LTS with this compreh - **Linux:** Use the following command to start the VPN connection: - ```sh + ```bash openvpn --config /path/to/client1.ovpn ``` @@ -267,7 +267,7 @@ Your OpenVPN server is now configured on your Ubuntu 24.04 LTS instance, and you ## Maintenance For ongoing maintenance, remember to renew your Let's Encrypt certificates regularly (they expire every 90 days). You can automate this process with a cron job: -```sh +```bash echo "0 0 1 */2 * certbot renew --quiet" | tee -a /etc/crontab ``` This cron job runs the `certbot renew` command on the first day of every second month at midnight. diff --git a/tutorials/migrate-dedibox-to-elastic-metal/index.mdx b/tutorials/migrate-dedibox-to-elastic-metal/index.mdx index c650dbe750..f86c8be0e8 100644 --- a/tutorials/migrate-dedibox-to-elastic-metal/index.mdx +++ b/tutorials/migrate-dedibox-to-elastic-metal/index.mdx @@ -46,7 +46,7 @@ We use **Duplicity** to encrypt the backup and upload it to Object Storage. Then Run the following commands to update your system and install Duplicity: -```sh +```bash apt update && apt upgrade -y apt install -y python3-boto3 python3-pip haveged gettext librsync-dev pipx python3 -m pip install --upgrade pip @@ -57,13 +57,13 @@ python3 -m pip install --upgrade pip Choose one of the following installation methods, depending on whether you want to install for all users or just the current user: #### Install for all users (recommended) -```sh +```bash sudo pipx --global install duplicity ``` This will install Duplicity in `/usr/local/bin/duplicity` and its dependencies in `/opt/pipx/venvs/duplicity`. #### Install for current user only -```sh +```bash pipx install duplicity ``` This will install Duplicity in `~/.local/bin/duplicity` and its dependencies in `~/.local/pipx/venvs/duplicity`. @@ -77,7 +77,7 @@ For more information, visit the [Duplicity GitLab page](https://gitlab.com/dupli ## Creating a GPG key 1. Generate the GPG key: - ```sh + ```bash gpg --full-generate-key ``` Use default settings: @@ -86,25 +86,25 @@ For more information, visit the [Duplicity GitLab page](https://gitlab.com/dupli - Expiration: **0 (never expires)** - Assign a name, email, and comment. 2. 
Retrieve the GPG Key fingerprint: - ```sh + ```bash gpg --list-keys ``` ## Transferring the GPG key to the Elastic Metal server 1. Export the GPG private key: - ```sh + ```bash gpg --export-secret-key --armor "your-key-id" > ~/my-key.asc ``` 2. Securely transfer the key: - ```sh + ```bash scp ~/my-key.asc root@:/root/ ``` ## Backing up your Dedibox 1. Create the necessary files and directories: - ```sh + ```bash touch scw-backup.sh .scw-configrc chmod 700 scw-backup.sh chmod 600 .scw-configrc @@ -112,7 +112,7 @@ For more information, visit the [Duplicity GitLab page](https://gitlab.com/dupli touch /var/log/duplicity/logfile{.log,-recent.log} ``` 2. Add the following configurations to `.scw-configrc`: - ```sh + ```bash export AWS_ACCESS_KEY_ID="" export AWS_SECRET_ACCESS_KEY="" export SCW_BUCKET="s3://s3.fr-par.scw.cloud/" @@ -123,37 +123,37 @@ For more information, visit the [Duplicity GitLab page](https://gitlab.com/dupli export LOGFILE="/var/log/duplicity/logfile.log" ``` 3. Backup script (`scw-backup.sh`): - ```sh + ```bash #!/bin/bash source .scw-configrc duplicity full --encrypt-key=${GPG_FINGERPRINT} ${SOURCE} ${SCW_BUCKET} ``` 4. Run the backup: - ```sh + ```bash ./scw-backup.sh ``` ## Restoring data on your Elastic Metal server 1. Install required packages: - ```sh + ```bash apt update && apt upgrade -y apt install -y python3-boto3 python3-pip gettext librsync-dev pipx python3 -m pip install --upgrade pip sudo pipx --global install duplicity ``` 2. Import the GPG key: - ```sh + ```bash gpg --import ~/my-key.asc ``` 3. Restore script (`scw-restore.sh`): - ```sh + ```bash #!/bin/bash source .scw-configrc duplicity restore ${SCW_BUCKET} /destination/folder/ ``` 4. Execute the restore script: - ```sh + ```bash ./scw-restore.sh ``` diff --git a/tutorials/migrating-docker-workloads-to-kubernetes-kapsule/index.mdx b/tutorials/migrating-docker-workloads-to-kubernetes-kapsule/index.mdx index 62c3982793..f6086247b3 100644 --- a/tutorials/migrating-docker-workloads-to-kubernetes-kapsule/index.mdx +++ b/tutorials/migrating-docker-workloads-to-kubernetes-kapsule/index.mdx @@ -184,7 +184,7 @@ This section outlines the settings for your cluster pools. You can configure as ### 4.2 Set up kubeconfig environment variable -```sh +```bash export KUBECONFIG=~/.kube/kapsule-config ``` @@ -250,7 +250,7 @@ spec: ### 6.1 Apply the deployment and service -```sh +```bash kubectl apply -f deployment.yaml kubectl apply -f service.yaml @@ -270,7 +270,7 @@ kubectl get services ### 7.1 Get the external IP address -```sh +```bash kubectl get service my-app-service ``` - Wait until the **EXTERNAL-IP** field is populated (may take a few minutes). @@ -287,7 +287,7 @@ kubectl get service my-app-service #### Set up Ingress Controller (Optional) For advanced routing and SSL termination, [deploy an ingress controller](/kubernetes/how-to/deploy-ingress-controller/) like **NGINX Ingress**. 
-```sh +```bash kubectl apply -f ``` @@ -330,7 +330,7 @@ spec: #### Apply PVC manifest -```sh +```bash kubectl apply -f pvc.yaml kubectl apply -f deployment.yaml ``` @@ -364,7 +364,7 @@ spec: Apply the HPA: -```sh +```bash kubectl apply -f hpa.yaml ``` diff --git a/tutorials/migrating-from-another-managed-kubernetes-service-to-scaleway-kapsule/index.mdx b/tutorials/migrating-from-another-managed-kubernetes-service-to-scaleway-kapsule/index.mdx index 9acb8a36bf..93c8ae456a 100644 --- a/tutorials/migrating-from-another-managed-kubernetes-service-to-scaleway-kapsule/index.mdx +++ b/tutorials/migrating-from-another-managed-kubernetes-service-to-scaleway-kapsule/index.mdx @@ -93,7 +93,7 @@ If you do not already have one, [sign up for a Scaleway account](https://account [Installing the Scaleway CLI](https://github.com/scaleway/scaleway-cli) can simplify some tasks. Run the following command in a terminal to install the Scaleway CLI: -```sh +```bash curl -s | sh scw init ``` @@ -122,7 +122,7 @@ Your new cluster will need access to your container images. Use the following command to log in to your Scaleway Registry using Docker: -```sh +```bash docker login rg..scw.cloud ``` @@ -134,7 +134,7 @@ docker login rg..scw.cloud For each image, you need to migrate: -```sh +```bash # Pull the image from your existing registry docker pull /: @@ -231,7 +231,7 @@ To create and configure a new Kapsule Kubernetes cluster, follow the steps below ### 5.2 Update kubeconfig -```sh +```bash export KUBECONFIG=~/.kube/kapsule-config:~/.kube/config kubectl config view --flatten > ~/.kube/config_combined @@ -274,7 +274,7 @@ Your existing manifests may contain cloud-provider-specific settings that need a - Update storage classes to match Scaleway's offerings. - List available storage classes: - ```sh + ```bash kubectl get storageclass ``` - Common storage classes in Scaleway: diff --git a/tutorials/monitor-gpu-instance-cockpit/index.mdx b/tutorials/monitor-gpu-instance-cockpit/index.mdx index 6a19107b59..5785134383 100644 --- a/tutorials/monitor-gpu-instance-cockpit/index.mdx +++ b/tutorials/monitor-gpu-instance-cockpit/index.mdx @@ -57,7 +57,7 @@ We are creating a Cockpit data source because your GPU Instance's metrics will b 1. [Connect to your GPU Instance through SSH](/gpu/how-to/create-manage-gpu-instance/#how-to-connect-to-a-gpu-instance). 2. Copy and paste the following command to create a configuration file named `config.alloy` in your Instance: - ```sh + ```bash touch config.alloy ``` 3. Copy and paste the following template inside `config.alloy`: @@ -109,7 +109,7 @@ We are creating a Cockpit data source because your GPU Instance's metrics will b 5. Copy and paste the following command to create a `docker-compose.yaml` file in your Instance: - ```sh + ```bash touch docker-compose.yaml ``` 6. Copy and paste the following configuration inside `docker-compose.yaml`, save it and exit the file. diff --git a/tutorials/nats-rdb-offload/index.mdx b/tutorials/nats-rdb-offload/index.mdx index 3f2ee459eb..0b6010cc8d 100644 --- a/tutorials/nats-rdb-offload/index.mdx +++ b/tutorials/nats-rdb-offload/index.mdx @@ -61,7 +61,7 @@ If not, go back to the NATS CLI documentation to properly create a Scaleway cont 1. Open a new terminal window from your project directory. 2. 
Install the following dependencies for the tutorial: - ```sh + ```bash sudo apt-get install mysql-server pip install mysql-connector-python pip install pynacl @@ -186,11 +186,11 @@ Your architecture is now ready: To see the flow running, you can follow these steps: 1. Open your terminal window dedicated to the subscriber and run the Python file: - ```sh + ```bash python nats_subscriber_with_mysql.py ``` 2. Open your terminal window dedicated to the publisher and run the python file: - ```sh + ```bash python nats_publisher.py ``` 3. Go back to your terminal window of the subscriber. You can see that the message has been received by the application and sent to the database. @@ -208,7 +208,7 @@ To be sure that you are not running resources without using them, you can delete You can also delete your NATS server with the following command in your terminal: -```sh +```bash nats-server --signal stop ``` diff --git a/tutorials/nextjs-app-serverless-functions-sqldb/index.mdx b/tutorials/nextjs-app-serverless-functions-sqldb/index.mdx index 2ec64903ad..96188782cd 100644 --- a/tutorials/nextjs-app-serverless-functions-sqldb/index.mdx +++ b/tutorials/nextjs-app-serverless-functions-sqldb/index.mdx @@ -37,14 +37,14 @@ You can either deploy your application: ### Initializing the project 1. Run the command below in a terminal to export your API access and secret keys as environment variables: - ```sh + ```bash export SCW_ACCESS_KEY=$(scw config get access-key) export SCW_SECRET_KEY=$(scw config get secret-key) ``` 2. Run the command below to make sure the environment variables are properly set: - ```sh + ```bash scw info ``` @@ -153,7 +153,7 @@ You can either deploy your application: docker login $REGISTRY_ENDPOINT -u nologin --password-stdin <<< "$SCW_SECRET_KEY" ``` The following output displays: - ```sh + ```bash Login Succeeded ``` @@ -177,7 +177,7 @@ You can either deploy your application: The first deployment can take up to two minutes. You can check the deployment status with the following command: - ```sh + ```bash scw container container list name=my-nextjs-blog ``` When the status appears as `ready`, you can access the website via your browser. @@ -576,14 +576,14 @@ To secure your deployment, we will now add a dedicated [IAM application](/iam/co ### Initialize the project 1. Run the command below to export your access key and secret key as environment variables: - ```sh + ```bash export SCW_ACCESS_KEY=$(scw config get access-key) export SCW_SECRET_KEY=$(scw config get secret-key) ``` 2. Run the command below to make sure the environment variables are properly set: - ```sh + ```bash scw info ``` @@ -646,7 +646,7 @@ To secure your deployment, we will now add a dedicated [IAM application](/iam/co docker login $REGISTRY_ENDPOINT -u nologin --password-stdin <<< "$SCW_SECRET_KEY" ``` The following output displays: - ```sh + ```bash Login Succeeded ``` diff --git a/tutorials/pihole-vpn/index.mdx b/tutorials/pihole-vpn/index.mdx index 996baeeb14..7fed7e6578 100644 --- a/tutorials/pihole-vpn/index.mdx +++ b/tutorials/pihole-vpn/index.mdx @@ -36,18 +36,18 @@ This guide will show you how to: 1. Log in to the [Scaleway console](https://console.scaleway.com) and **create a new Instance**. 2. Choose **Ubuntu 22.04 LTS** as the operating system. 3. Once the Instance is created, connect to it via SSH: - ```sh + ```bash ssh root@your_instance_ip ``` 4. Update and upgrade your system: - ```sh + ```bash apt update && apt upgrade -y ``` ## Installing Pi-hole 1. 
Download and run the installer: - ```sh + ```bash wget -O basic-install.sh https://install.pi-hole.net chmod +x basic-install.sh ./basic-install.sh @@ -57,17 +57,17 @@ This guide will show you how to: - Choose **IPv4 + IPv6 filtering** - Install the **Pi-hole Web Interface** - Set a **strong password** using: - ```sh + ```bash pihole -a -p ``` 3. Configure Pi-hole for local access only: - ```sh + ```bash pihole -a -i local ``` ### Optimizing Pi-hole To enhance privacy, you can set up **Unbound**, a local recursive DNS resolver: -```sh +```bash apt install unbound -y ``` Then, edit Pi-hole settings to use `127.0.0.1#5335` as your custom upstream DNS. @@ -76,7 +76,7 @@ Then, edit Pi-hole settings to use `127.0.0.1#5335` as your custom upstream DNS. PiVPN allows us to configure a VPN server with either **OpenVPN** or **WireGuard**. Run the following commands to install PiVPN on your Instance. -```sh +```bash wget -O pivpn-install.sh https://install.pivpn.io chmod +x pivpn-install.sh ./pivpn-install.sh @@ -95,7 +95,7 @@ Follow the setup prompts and select: ### Firewall configuration Restrict access to only necessary services: -```sh +```bash ufw allow 22/tcp ufw allow 53/udp ufw allow 4343/tcp # If using OpenVPN on port 4343 @@ -105,28 +105,28 @@ ufw enable ### Change OpenVPN default port Edit OpenVPN’s configuration file: -```sh +```bash nano /etc/openvpn/server.conf ``` Change `port 1194` to `port 4343` (or another port of your choice), then restart OpenVPN: -```sh +```bash systemctl restart openvpn ``` ### Enable Fail2Ban Prevent brute-force attacks by installing Fail2Ban: -```sh +```bash apt install fail2ban -y systemctl enable fail2ban --now ``` ## Adding VPN users For OpenVPN: -```sh +```bash pivpn add ``` For WireGuard: -```sh +```bash pivpn wg add ``` Download the VPN configuration file securely using SCP or SFTP. diff --git a/tutorials/power-on-off-instances-jobs/index.mdx b/tutorials/power-on-off-instances-jobs/index.mdx index afd96c1989..3269b4d0ab 100644 --- a/tutorials/power-on-off-instances-jobs/index.mdx +++ b/tutorials/power-on-off-instances-jobs/index.mdx @@ -99,7 +99,7 @@ Serverless Jobs are perfectly adapted for these autonomous tasks, as we do not n 9. In the **Execution** tab, define the command below, and replace the placeholder with the ID of your Instance: - ```sh + ```bash /scw instance server stop 11111111-1111-1111-1111-111111111111 ``` diff --git a/tutorials/run-manage-linux-vm-on-apple-silicon-tart/index.mdx b/tutorials/run-manage-linux-vm-on-apple-silicon-tart/index.mdx index f3fb137ec6..a6b66ded9f 100644 --- a/tutorials/run-manage-linux-vm-on-apple-silicon-tart/index.mdx +++ b/tutorials/run-manage-linux-vm-on-apple-silicon-tart/index.mdx @@ -44,12 +44,12 @@ In this tutorial, we will use [Homebrew](https://brew.sh/index), which is a popu ## Installing Tart and a first VM on macOS 1. Install Tart using Homebrew. Open your terminal and run the following command to install Tart on your Mac mini using Homebrew. - ```sh + ```bash brew install cirruslabs/cli/tart ``` 2. Clone the desired VM image from the available MacOS images on Tart's GitHub repository. For example, to run the MacOS Sonoma image, use the following commands: - ```sh + ```bash tart clone ghcr.io/cirruslabs/macos-sonoma-base:latest sonoma-base tart run sonoma-base ``` @@ -73,7 +73,7 @@ Currently, Tart supports the following Linux images: * Fedora: `ghcr.io/cirruslabs/fedora:latest` 1. 
Clone the Ubuntu image and resize its disk size to 50GB using the following commands: - ```sh + ```bash tart clone ghcr.io/cirruslabs/ubuntu:latest ubuntu tart set ubuntu --disk-size 50 ``` @@ -82,7 +82,7 @@ Currently, Tart supports the following Linux images: 2. Run the resized Ubuntu image and log in with the provided credentials. - ```sh + ```bash tart run ubuntu ``` diff --git a/tutorials/run-nodejs-express-server-on-serverless-containers/index.mdx b/tutorials/run-nodejs-express-server-on-serverless-containers/index.mdx index 5fbf65aab3..fa41115454 100644 --- a/tutorials/run-nodejs-express-server-on-serverless-containers/index.mdx +++ b/tutorials/run-nodejs-express-server-on-serverless-containers/index.mdx @@ -151,7 +151,7 @@ To Dockerize our simple web app, we will use the official Node.js image. ## Deploying the application on Serverless Containers 1. Build your image using the following command: - ```sh + ```bash docker build . -t ``` 2. [Push](/container-registry/how-to/push-images/) the created image into the [Container Registry](/container-registry/quickstart/) linked to your Containers namespace diff --git a/tutorials/run-python-flask-server-on-serverless-container/index.mdx b/tutorials/run-python-flask-server-on-serverless-container/index.mdx index e31f1ae033..cc149e75ab 100644 --- a/tutorials/run-python-flask-server-on-serverless-container/index.mdx +++ b/tutorials/run-python-flask-server-on-serverless-container/index.mdx @@ -134,7 +134,7 @@ To Dockerize our simple web app, we will use the official Python 3 Alpine image ## Deploying the application on Serverless Containers 1. Build your image using the following command: - ```sh + ```bash docker build . -t ``` 2. [Push](/container-registry/how-to/push-images/) the created image into the [container registry](/container-registry/quickstart/) linked to your Containers namespace diff --git a/tutorials/s3cmd/index.mdx b/tutorials/s3cmd/index.mdx index 11e86bf759..87d3f18620 100644 --- a/tutorials/s3cmd/index.mdx +++ b/tutorials/s3cmd/index.mdx @@ -273,6 +273,6 @@ s3cmd delcors s3://bucketname ## Going further For more information about the different s3cmd commands, refer to the [official documentation](https://s3tools.org/usage), or run the following command in a terminal: -```sh +```bash s3cmd --help ``` \ No newline at end of file diff --git a/tutorials/scaleway-packer-plugin/index.mdx b/tutorials/scaleway-packer-plugin/index.mdx index a47a7491f3..03199a2826 100644 --- a/tutorials/scaleway-packer-plugin/index.mdx +++ b/tutorials/scaleway-packer-plugin/index.mdx @@ -32,7 +32,7 @@ Starting from version 1.7, Packer supports a new `packer init` command allowing To install this plugin, copy and paste this code into your Packer configuration. Then, run `packer init`. 
-```sh +```bash packer { required_plugins { scaleway = { diff --git a/tutorials/send-sms-iot-device-twilio/index.mdx b/tutorials/send-sms-iot-device-twilio/index.mdx index 4c75c9dbcf..1f809877e7 100644 --- a/tutorials/send-sms-iot-device-twilio/index.mdx +++ b/tutorials/send-sms-iot-device-twilio/index.mdx @@ -61,7 +61,7 @@ We are going to do things in reverse order: In the text area below you should see something like: - ```sh + ```bash curl 'https://api.twilio.com/2010-04-01/Accounts//Messages.json' -X POST \ --data-urlencode 'To=+01234567890' \ --data-urlencode 'From=+12345678901' \ diff --git a/tutorials/snapshot-instances-jobs/index.mdx b/tutorials/snapshot-instances-jobs/index.mdx index c677ffaa3b..34117a6b72 100644 --- a/tutorials/snapshot-instances-jobs/index.mdx +++ b/tutorials/snapshot-instances-jobs/index.mdx @@ -58,7 +58,7 @@ Serverless Jobs are perfectly adapted for these autonomous tasks, as we do not n For more details about variables used by `cli`, refer to the [CLI config documentation](https://github.com/scaleway/scaleway-cli/blob/master/docs/commands/config.md). 9. In the **Execution** tab, define the command below, and replace the placeholder with the ID of your Block Storage volume: - ```sh + ```bash /scw block snapshot create volume-id=11111111-1111-1111-1111-111111111111 ``` diff --git a/tutorials/snapshot-managed-databases/index.mdx b/tutorials/snapshot-managed-databases/index.mdx index 3ce511605b..477a4c0541 100644 --- a/tutorials/snapshot-managed-databases/index.mdx +++ b/tutorials/snapshot-managed-databases/index.mdx @@ -60,12 +60,12 @@ Serverless Jobs are perfectly adapted for these autonomous tasks, as we do not n 9. In the **Execution** tab, define the command below, and replace the placeholder with the ID of your Database Instance: - ```sh + ```bash /scw rdb backup create instance-id=11111111-1111-1111-1111-111111111111 database-name=YOUR_DB_NAME ``` - ```sh + ```bash /scw rdb snapshot create instance-id=11111111-1111-1111-1111-111111111111 ``` diff --git a/tutorials/strapi-app-serverless-containers-sqldb/index.mdx b/tutorials/strapi-app-serverless-containers-sqldb/index.mdx index caa2899b38..860ee57e10 100644 --- a/tutorials/strapi-app-serverless-containers-sqldb/index.mdx +++ b/tutorials/strapi-app-serverless-containers-sqldb/index.mdx @@ -37,14 +37,14 @@ You can either deploy your application: ### Initializing the project 1. Run the command below in a terminal to export your API access and secret keys as environment variables: - ```sh + ```bash export SCW_ACCESS_KEY=$(scw config get access-key) export SCW_SECRET_KEY=$(scw config get secret-key) ``` 2. Run the command below to make sure the environment variables are properly set: - ```sh + ```bash scw info ``` @@ -234,7 +234,7 @@ You can either deploy your application: docker login $REGISTRY_ENDPOINT -u nologin --password-stdin <<< "$SCW_SECRET_KEY" ``` The following output displays: - ```sh + ```bash Login Succeeded ``` @@ -278,7 +278,7 @@ You can either deploy your application: The first deployment can take a few minutes. You can check the deployment status with the following command: - ```sh + ```bash scw container container list name=my-strapi-blog -o human=Status ``` When the status appears as `ready`, you can access the Strapi Administration Panel via your browser. @@ -393,14 +393,14 @@ To secure your deployment, we will now add a dedicated [IAM application](/iam/co ### Initialize the project 1. 
Run the command below to export your access key and secret key as environment variables: - ```sh + ```bash export SCW_ACCESS_KEY=$(scw config get access-key) export SCW_SECRET_KEY=$(scw config get secret-key) ``` 2. Run the command below to make sure the environment variables are properly set: - ```sh + ```bash scw info ``` @@ -537,7 +537,7 @@ To secure your deployment, we will now add a dedicated [IAM application](/iam/co docker login $REGISTRY_ENDPOINT -u nologin --password-stdin <<< "$SCW_SECRET_KEY" ``` The following output displays: - ```sh + ```bash Login Succeeded ``` @@ -762,7 +762,7 @@ The Terraform/OpenTofu file creates several resources: The first deployment can take a few minutes. You can check the deployment status with the following command: - ```sh + ```bash scw container container list name=tutorial-strapi-blog-tf -o human=Name,Status ``` When the status appears as `ready`, you can access the Strapi Administration Panel via your browser. diff --git a/tutorials/traefik-v2-cert-manager/index.mdx b/tutorials/traefik-v2-cert-manager/index.mdx index 623c2623da..b35d0eaf91 100644 --- a/tutorials/traefik-v2-cert-manager/index.mdx +++ b/tutorials/traefik-v2-cert-manager/index.mdx @@ -51,7 +51,7 @@ Our goal in this tutorial is to: In this step, we will create a wildcard DNS record to point to the external IP address of our Traefik load balancer. This DNS record will allow us to route traffic to our Kubernetes services using custom domain names. 1. Retrieve the external IP of your LoadBalancer: - ```sh + ```bash kubectl get svc traefik -n kube-system ``` The external IP will be listed under the `EXTERNAL-IP` column. @@ -80,18 +80,18 @@ In this step, we will create a wildcard DNS record to point to the external IP a ### Installing cert-manager 1. Install a recent version of cert-manager, in this example v1.16: - ```sh + ```bash kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.16.0/cert-manager.yaml ``` 2. Verify the installation: - ```sh + ```bash kubectl get pods --namespace cert-manager ``` ### Creating a Let's Encrypt issuer 1. Open a text editor and create a new file for the ClusterIssuer: - ```sh + ```bash nano cluster-issuer.yaml ``` 2. Add the following content to the `cluster-issuer.yaml` file: @@ -112,14 +112,14 @@ In this step, we will create a wildcard DNS record to point to the external IP a class: traefik ``` 3. Apply the issuer configuration: - ```sh + ```bash kubectl apply -f cluster-issuer.yaml ``` ### Creating and using a Let's Encrypt certificate 1. Open a text editor and create a new file for the certificate: - ```sh + ```bash nano mycert.yaml ``` 2. Add the following content to the `mycert.yaml` file: @@ -139,22 +139,22 @@ In this step, we will create a wildcard DNS record to point to the external IP a kind: ClusterIssuer ``` 3. Apply the certificate configuration: - ```sh + ```bash kubectl apply -f mycert.yaml ``` 4. Verify the certificate creation: - ```sh + ```bash kubectl describe certificate teacoffee-cert ``` ### Creating an HTTPS ingress 1. Deploy the "tea coffee" test application: - ```sh + ```bash kubectl create -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/main/examples/ingress-resources/complete-example/cafe.yaml 2. Open a text editor and create a new file for the HTTPS ingress object: - ```sh + ```bash nano mysite.yaml ``` 3. Add the following content to the `mysite.yaml` file: @@ -189,28 +189,28 @@ In this step, we will create a wildcard DNS record to point to the external IP a number: 80 ``` 4. 
Apply the HTTPS ingress configuration: - ```sh + ```bash kubectl apply -f mysite.yaml ``` 5. Test the HTTPS endpoint: - ```sh + ```bash curl -v https://teacoffee.mytest.com/tea ``` ### Accessing the Traefik dashboard 1. Retrieve the name of the Traefik Pod: - ```sh + ```bash kubectl get pods -n kube-system --selector "app.kubernetes.io/name=traefik" --output=name ``` An output similar to the following should display: `pod/traefik-xxxxxxxxx-yyyyy`. 2. Use the exact Pod name from the previous command to port-forward: - ```sh + ```bash kubectl port-forward -n kube-system 9000:9000 ``` For example: - ```sh + ```bash kubectl port-forward -n kube-system pod/traefik-xxxxxxxxx-yyyyy 9000:9000 ``` diff --git a/tutorials/transform-bucket-images-triggers-functions-deploy/index.mdx b/tutorials/transform-bucket-images-triggers-functions-deploy/index.mdx index 85d55ac5f4..f213402e17 100644 --- a/tutorials/transform-bucket-images-triggers-functions-deploy/index.mdx +++ b/tutorials/transform-bucket-images-triggers-functions-deploy/index.mdx @@ -137,7 +137,7 @@ You will now learn how to deploy Serverless Functions and connect them using tri 6. Run the following command in the same terminal to initialize a new NPM project and create an empty `package.json` file: - ```sh + ```bash npm init --yes ``` @@ -289,7 +289,7 @@ You will now learn how to deploy Serverless Functions and connect them using tri 4. Save the file and exit the code editor. 5. Run the following command in the same terminal to initialize a new NPM project and create an empty `package.json` file: - ```sh + ```bash npm init --yes ``` 6. Run the following command to download the required dependencies and packages: diff --git a/tutorials/upgrade-managed-postgresql-database/index.mdx b/tutorials/upgrade-managed-postgresql-database/index.mdx index 241289005c..8409536c06 100644 --- a/tutorials/upgrade-managed-postgresql-database/index.mdx +++ b/tutorials/upgrade-managed-postgresql-database/index.mdx @@ -67,7 +67,7 @@ There are three steps to completing a manual migration: creating a new PostgreSQ 1. Retrieve the database ID of your **old** Database Instance. You can find it on the **Database Instance Information** page of your Instance: 2. Make a backup of your logical database(s) using the API: - ```sh + ```bash curl -X POST -H "Content-Type: application/json" \ -H "X-Auth-Token: $SECRET_KEY" https://api.scaleway.com/rdb/v1//fr-par/backups -d '{ "instance_id": "", @@ -115,7 +115,7 @@ There are three steps to completing a manual migration: creating a new PostgreSQ 1. Retrieve the database ID of your **new** Database Instance. You can find it on the **Database Instance Information** page of your Instance. 2. Restore the backup of your logical database(s) using the API: - ```sh + ```bash curl -X POST -H "Content-Type: application/json" \ -H "X-Auth-Token: $SECRET_KEY" https://api.scaleway.com/rdb/v1/regions//backups//restore -d '{ "database_name": "",