diff --git a/ai-data/generative-apis/api-cli/understanding-errors.mdx b/ai-data/generative-apis/api-cli/understanding-errors.mdx
index 1b134fb5ad..4b298ae6a2 100644
--- a/ai-data/generative-apis/api-cli/understanding-errors.mdx
+++ b/ai-data/generative-apis/api-cli/understanding-errors.mdx
@@ -32,6 +32,8 @@ Below are usual HTTP error codes:
- 404 - **Route Not Found**: The requested resource could not be found. Check your request is being made to the correct endpoint.
- 422 - **Model Not Found**: The `model` key is present in the request payload, but the corresponding model is not found.
- 422 - **Missing Model**: The `model` key is missing from the request payload.
+- 429 - **Too Many Requests**: You are exceeding your current quota for the requested model, calculated in requests per minute. Find rate limits on [this page](/ai-data/generative-apis/reference-content/rate-limits/).
+- 429 - **Too Many Tokens**: You are exceeding your current quota for the requested model, calculated in tokens per minute. Find rate limits on [this page](/ai-data/generative-apis/reference-content/rate-limits/).
- 500 - **API error**: An unexpected internal error has occurred within Scaleway's systems. If the issue persists, please [open a support ticket](https://console.scaleway.com/support/tickets/create).
For streaming responses via SSE, 5xx errors may occur after a 200 response has been returned.
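+
+Since a stream can fail after the initial 200 response has been returned, it is good practice to handle errors around the stream iteration itself. Below is a minimal sketch using the OpenAI Python client; the model name and prompt are placeholder assumptions.
+
+```python
+from openai import OpenAI, APIError
+
+client = OpenAI(
+    base_url="https://api.scaleway.ai/v1",  # Scaleway's Generative APIs service URL
+    api_key=""  # Your unique API secret key from Scaleway
+)
+
+try:
+    stream = client.chat.completions.create(
+        model="llama-3.1-8b-instruct",
+        messages=[{"role": "user", "content": "Tell me a joke"}],
+        stream=True,
+    )
+    for chunk in stream:  # a 5xx error can surface here, after the 200 response
+        if chunk.choices[0].delta.content:
+            print(chunk.choices[0].delta.content, end="")
+except APIError as e:
+    # Raised for errors returned before or during streaming
+    print(f"API error: {e}")
+```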
\ No newline at end of file
diff --git a/ai-data/generative-apis/api-cli/using-chat-api.mdx b/ai-data/generative-apis/api-cli/using-chat-api.mdx
index 2a2b55e64a..b1a1236731 100644
--- a/ai-data/generative-apis/api-cli/using-chat-api.mdx
+++ b/ai-data/generative-apis/api-cli/using-chat-api.mdx
@@ -68,23 +68,25 @@ Our chat API is OpenAI compatible. Use OpenAI’s [API reference](https://platfo
- max_tokens
- stream
- presence_penalty
-- response_format
+- [response_format](/ai-data/generative-apis/how-to/use-structured-outputs)
- logprobs
- stop
- seed
+- [tools](/ai-data/generative-apis/how-to/use-function-calling)
+- [tool_choice](/ai-data/generative-apis/how-to/use-function-calling)
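+
+As an illustration, the following minimal sketch combines several supported parameters in a single OpenAI-compatible call; the model name and parameter values are placeholder assumptions.
+
+```python
+from openai import OpenAI
+
+client = OpenAI(
+    base_url="https://api.scaleway.ai/v1",  # Scaleway's Generative APIs service URL
+    api_key=""  # Your unique API secret key from Scaleway
+)
+
+response = client.chat.completions.create(
+    model="llama-3.1-8b-instruct",
+    messages=[{"role": "user", "content": "List three European capitals."}],
+    max_tokens=128,   # limits the length of the output
+    temperature=0.2,  # lower values give more deterministic answers
+    stop=["\n\n"],    # stops generation at the first blank line
+    seed=42,          # makes sampling reproducible across calls
+)
+print(response.choices[0].message.content)
+```
+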
### Unsupported parameters
- frequency_penalty
- n
- top_logprobs
-- tools
-- tool_choice
- logit_bias
- user
If you have a use case requiring one of these unsupported parameters, please [contact us via Slack](https://slack.scaleway.com/) on #ai channel.
-
- Go further with [Python code examples](/ai-data/generative-apis/how-to/query-text-models/#querying-text-models-via-api) to query text models using Scaleway's Chat API.
-
\ No newline at end of file
+## Going further
+
+1. [Python code examples](/ai-data/generative-apis/how-to/query-language-models/#querying-language-models-via-api) to query language models using Scaleway's Chat API.
+2. [How to use structured outputs](/ai-data/generative-apis/how-to/use-structured-outputs) with the `response_format` parameter.
+3. [How to use function calling](/ai-data/generative-apis/how-to/use-function-calling) with `tools` and `tool_choice`.
\ No newline at end of file
diff --git a/ai-data/generative-apis/concepts.mdx b/ai-data/generative-apis/concepts.mdx
index 271e06a440..8b2e247f41 100644
--- a/ai-data/generative-apis/concepts.mdx
+++ b/ai-data/generative-apis/concepts.mdx
@@ -20,6 +20,10 @@ API rate limits define the maximum number of requests a user can make to the Gen
A context window is the maximum amount of prompt data considered by the model to generate a response. Using models with high context length, you can provide more information to generate relevant responses. The context is measured in tokens.
+## Function calling
+
+Function calling allows a large language model (LLM) to interact with external tools or APIs, executing specific tasks based on user requests. The LLM identifies the appropriate function, extracts the required parameters, and returns the results as structured data, typically in JSON format.
+
## Embeddings
Embeddings are numerical representations of text data that capture semantic information in a dense vector format. In Generative APIs, embeddings are essential for tasks such as similarity matching, clustering, and serving as inputs for downstream models. These vectors enable the model to understand and generate text based on the underlying meaning rather than just the surface-level words.
diff --git a/ai-data/generative-apis/how-to/query-text-models.mdx b/ai-data/generative-apis/how-to/query-language-models.mdx
similarity index 74%
rename from ai-data/generative-apis/how-to/query-text-models.mdx
rename to ai-data/generative-apis/how-to/query-language-models.mdx
index 7194ab4a7d..5e97e6c8df 100644
--- a/ai-data/generative-apis/how-to/query-text-models.mdx
+++ b/ai-data/generative-apis/how-to/query-language-models.mdx
@@ -1,25 +1,24 @@
---
meta:
- title: How to query text models
- description: Learn how to interact with powerful text models using Scaleway's Generative APIs service.
+ title: How to query language models
+ description: Learn how to interact with powerful language models using Scaleway's Generative APIs service.
content:
- h1: How to query text models
- paragraph: Learn how to interact with powerful text models using Scaleway's Generative APIs service.
-tags: generative-apis ai-data text-models
+ h1: How to query language models
+ paragraph: Learn how to interact with powerful language models using Scaleway's Generative APIs service.
+tags: generative-apis ai-data language-models
dates:
- validation: 2024-08-28
+ validation: 2024-09-30
posted: 2024-08-28
---
-Scaleway's Generative APIs service allows users to interact with powerful text models hosted on the platform.
+Scaleway's Generative APIs service allows users to interact with powerful language models hosted on the platform.
-There are several ways to interact with text models:
-- The Scaleway [console](https://console.scaleway.com) will soon provide a complete [playground](/ai-data/generative-apis/how-to/query-text-models/#accessing-the-playground), aiming to test models, adapt parameters, and observe how these changes affect the output in real-time.
-- Via the [Chat API](/ai-data/generative-apis/how-to/query-text-models/#querying-text-models-via-api)
+There are several ways to interact with language models:
+- The Scaleway [console](https://console.scaleway.com) provides a complete [playground](/ai-data/generative-apis/how-to/query-language-models/#accessing-the-playground), allowing you to test models, adapt parameters, and observe how these changes affect the output in real time.
+- Via the [Chat API](/ai-data/generative-apis/how-to/query-language-models/#querying-language-models-via-api)
-- Access to this service is restricted while in beta. You can request access to the product by filling out a form on Scaleway's [betas page](https://www.scaleway.com/en/betas/#generative-apis).
- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- A valid [API key](/identity-and-access-management/iam/how-to/create-api-keys/) for API authentication
@@ -27,9 +26,20 @@ There are several ways to interact with text models:
## Accessing the Playground
-Scaleway's Playground is in development, stay tuned!
+Scaleway provides a web playground for instruct-based models hosted on Generative APIs.
-## Querying text models via API
+1. Navigate to Generative APIs under the AI section of the [Scaleway console](https://console.scaleway.com/) side menu. The list of models you can query displays.
+2. Click the name of the chat model you want to try. Alternatively, click the icon next to the chat model, then click **Try model** in the menu.
+
+The web playground displays.
+
+## Using the Playground
+1. Enter a prompt at the bottom of the page, or use one of the suggested prompts in the conversation area.
+2. Edit the hyperparameters listed in the right column, for example the default temperature, to get more or less randomness in the outputs.
+3. Switch models at the top of the page to observe the capabilities of chat models offered via Generative APIs.
+4. Click **View code** to get code snippets configured according to your settings in the playground.
+
+## Querying language models via API
The [Chat API](/ai-data/generative-apis/api-cli/using-chat-api/) is an OpenAI-compatible REST API for generating and manipulating conversations.
diff --git a/ai-data/generative-apis/how-to/query-vision-models.mdx b/ai-data/generative-apis/how-to/query-vision-models.mdx
new file mode 100644
index 0000000000..6760c1c5ca
--- /dev/null
+++ b/ai-data/generative-apis/how-to/query-vision-models.mdx
@@ -0,0 +1,238 @@
+---
+meta:
+ title: How to query vision models
+ description: Learn how to interact with powerful vision models using Scaleway's Generative APIs service.
+content:
+ h1: How to query vision models
+ paragraph: Learn how to interact with powerful vision models using Scaleway's Generative APIs service.
+tags: generative-apis ai-data vision-models
+dates:
+ validation: 2024-09-30
+ posted: 2024-09-30
+---
+
+Scaleway's Generative APIs service allows users to interact with powerful vision models hosted on the platform.
+
+
+ Vision models can understand and analyze images, not generate them.
+
+
+There are several ways to interact with vision models:
+- The Scaleway [console](https://console.scaleway.com) provides a complete [playground](/ai-data/generative-apis/how-to/query-vision-models/#accessing-the-playground), allowing you to test models, adapt parameters, and observe how these changes affect the output in real time.
+- Via the [Chat API](/ai-data/generative-apis/how-to/query-vision-models/#querying-vision-models-via-api)
+
+
+
+- A Scaleway account logged into the [console](https://console.scaleway.com)
+- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
+- A valid [API key](/identity-and-access-management/iam/how-to/create-api-keys/) for API authentication
+- Python 3.7+ installed on your system
+
+## Accessing the Playground
+
+Scaleway provides a web playground for vision models hosted on Generative APIs.
+
+1. Navigate to Generative APIs under the AI section of the [Scaleway console](https://console.scaleway.com/) side menu. The list of models you can query displays.
+2. Click the name of the vision model you want to try. Alternatively, click the icon next to the vision model, then click **Try model** in the menu.
+
+The web playground displays.
+
+## Using the Playground
+1. Upload one or multiple images to the prompt area at the bottom of the page. Enter a prompt, for example, to describe the image(s) you attached.
+2. Edit the hyperparameters listed in the right column, for example the default temperature, to get more or less randomness in the outputs.
+3. Switch models at the top of the page to observe the capabilities of chat and vision models offered via Generative APIs.
+4. Click **View code** to get code snippets configured according to your settings in the playground.
+
+## Querying vision models via API
+
+The [Chat API](/ai-data/generative-apis/api-cli/using-chat-api/) is an OpenAI-compatible REST API for generating and manipulating conversations.
+
+You can query the vision models programmatically using your favorite tools or languages.
+Vision models take both text and images as inputs.
+
+
+ Unlike traditional language models, vision models will take a content array for the user role, structuring text and images as inputs.
+
+
+In the following example, we will use the OpenAI Python client.
+
+### Installing the OpenAI SDK
+
+Install the OpenAI SDK using pip:
+
+```bash
+pip install openai
+```
+
+### Initializing the client
+
+Initialize the OpenAI client with your base URL and API key:
+
+```python
+from openai import OpenAI
+
+# Initialize the client with your base URL and API key
+client = OpenAI(
+ base_url="https://api.scaleway.ai/v1", # Scaleway's Generative APIs service URL
+ api_key="" # Your unique API secret key from Scaleway
+)
+```
+
+### Generating a chat completion
+
+You can now create a chat completion, for example with the `pixtral-12b-2409` model:
+
+```python
+# Create a chat completion using the 'pixtral-12b-2409' model
+response = client.chat.completions.create(
+ model="pixtral-12b-2409",
+ messages=[
+ {
+ "role": "user",
+ "content": [
+ {"type": "text", "text": "What is this image?"},
+ {"type": "image_url", "image_url": {"url": "https://picsum.photos/id/32/512/512"}},
+ ] # Vision models will take a content array with text and image_url objects.
+
+ }
+ ],
+ temperature=0.7, # Adjusts creativity
+ max_tokens=2048, # Limits the length of the output
+ top_p=0.9 # Controls diversity through nucleus sampling. You usually only need to use temperature.
+)
+
+# Print the generated response
+print(response.choices[0].message.content)
+```
+
+This code sends a message containing both a prompt and an image to the vision model, and returns an answer based on your input. The `temperature`, `max_tokens`, and `top_p` parameters control the response's creativity, length, and diversity, respectively.
+
+A conversation style may include a default system prompt. You can set this prompt by making the first message use the `system` role. For example:
+
+```python
+[
+ {
+ "role": "system",
+ "content": "You are Xavier Niel."
+ }
+]
+```
+
+### Passing images to Pixtral
+
+1. **Image URLs**: If the image is available online, you can just include the image URL in your request as demonstrated above. This approach is simple and does not require any encoding.
+2. **Base64 encoded image**: Base64 encoding is a standard way to transform binary data, like images, into a text format, making it easier to transmit over the internet.
+
+The following Python code sample shows you how to encode an image in base64 format and pass it to your request payload.
+
+```python
+import base64
+from io import BytesIO
+from PIL import Image
+
+def encode_image(img):
+ buffered = BytesIO()
+ img.save(buffered, format="JPEG")
+ encoded_string = base64.b64encode(buffered.getvalue()).decode("utf-8")
+ return encoded_string
+
+img = Image.open("path_to_your_image.jpg")
+base64_img = encode_image(img)
+
+payload = {
+ "messages": [
+ {
+ "role": "user",
+ "content": [
+ {
+ "type": "text",
+ "text": "What is this image?"
+ },
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": f"data:image/jpeg;base64,{base64_img}"
+ }
+ }
+ ]
+ }
+ ],
+ ... # other parameters
+}
+
+```
+### Model parameters and their effects
+
+The following parameters will influence the output of the model:
+
+- **`messages`**: A list of message objects that represent the conversation history. Each message should have a `role` (e.g., "system", "user", "assistant") and `content`. The content is an array that can contain text and/or image objects.
+- **`temperature`**: Controls the output's randomness. Lower values (e.g., 0.2) make the output more deterministic, while higher values (e.g., 0.8) make it more creative.
+- **`max_tokens`**: The maximum number of tokens (words or parts of words) in the generated output.
+- **`top_p`**: Recommended for advanced use cases only. You usually only need to use temperature. `top_p` controls the diversity of the output, using nucleus sampling, where the model considers the tokens with top probabilities until the cumulative probability reaches `top_p`.
+- **`stop`**: A string or list of strings where the model will stop generating further tokens. This is useful for controlling the end of the output.
+
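+For instance, here is a minimal sketch (with placeholder values) showing `stop` and a lower `temperature` on a vision request, reusing the client initialized above:
+
+```python
+response = client.chat.completions.create(
+    model="pixtral-12b-2409",
+    messages=[{
+        "role": "user",
+        "content": [
+            {"type": "text", "text": "Describe this image in one short paragraph."},
+            {"type": "image_url", "image_url": {"url": "https://picsum.photos/id/32/512/512"}},
+        ]
+    }],
+    temperature=0.2,  # lower temperature for a more deterministic description
+    stop=["\n\n"],    # generation halts at the first blank line
+)
+print(response.choices[0].message.content)
+```
+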
+
+ If you encounter an error such as "Forbidden 403" refer to the [API documentation](/ai-data/generative-apis/api-cli/understanding-errors) for troubleshooting tips.
+
+
+## Streaming
+
+By default, the outputs are returned to the client only after the generation process is complete. However, a common alternative is to stream the results back to the client as they are generated. This is particularly useful in chat applications, where it allows the client to view the results incrementally as each token is produced.
+The following is an example using the Chat Completions API:
+
+```python
+from openai import OpenAI
+
+client = OpenAI(
+ base_url="https://api.scaleway.ai/v1", # Scaleway's Generative APIs service URL
+ api_key="" # Your unique API key from Scaleway
+)
+response = client.chat.completions.create(
+ model="pixtral-12b-2409",
+ messages=[{
+ "role": "user",
+ "content": [
+ {"type": "text", "text": "What is this image?"},
+ {"type": "image_url", "image_url": {"url": "https://picsum.photos/id/32/512/512"}},
+ ]
+ }],
+ stream=True,
+)
+
+for chunk in response:
+ if chunk.choices[0].delta.content:
+ print(chunk.choices[0].delta.content, end="")
+```
+
+## Async
+
+The service also supports asynchronous mode for any chat completion.
+
+```python
+import asyncio
+from openai import AsyncOpenAI
+
+client = AsyncOpenAI(
+ base_url="https://api.scaleway.ai/v1", # Scaleway's Generative APIs service URL
+ api_key="" # Your unique API key from Scaleway
+)
+
+async def main():
+ stream = await client.chat.completions.create(
+ model="pixtral-12b-2409",
+ messages=[{
+ "role": "user",
+ "content": [
+ {"type": "text", "text": "What is this image?"},
+ {"type": "image_url", "image_url": {"url": "https://picsum.photos/id/32/512/512"}},
+ ]
+ }],
+ stream=True,
+ )
+ async for chunk in stream:
+ print(chunk.choices[0].delta.content or "", end="")
+
+asyncio.run(main())
+```
diff --git a/ai-data/generative-apis/how-to/use-function-calling.mdx b/ai-data/generative-apis/how-to/use-function-calling.mdx
new file mode 100644
index 0000000000..7c817d3126
--- /dev/null
+++ b/ai-data/generative-apis/how-to/use-function-calling.mdx
@@ -0,0 +1,331 @@
+---
+meta:
+ title: How to use function calling
+ description: Learn how to implement function calling capabilities using Scaleway's Chat Completions API service.
+content:
+ h1: How to use function calling
+ paragraph: Learn how to enhance AI interactions by integrating external tools and functions using Scaleway's Chat Completions API service.
+tags: chat-completions-api
+dates:
+ validation: 2024-09-24
+ posted: 2024-09-24
+---
+
+Scaleway's Chat Completions API supports function calling as introduced by OpenAI.
+
+## What is function calling?
+
+Function calling allows a large language model (LLM) to interact with external tools or APIs, executing specific tasks based on user requests. The LLM identifies the appropriate function, extracts the required parameters, and returns the tool call to be executed as structured data, typically in JSON format. While errors can occur, custom parsers or tools like LlamaIndex and LangChain can help ensure valid results.
+
+
+
+- Access to Generative APIs.
+- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
+- A valid [API key](/identity-and-access-management/iam/how-to/create-api-keys/) for API authentication
+- Python 3.7+ installed on your system
+
+## Supported models
+
+* llama-3.1-8b-instruct
+* llama-3.1-70b-instruct
+* mistral-nemo-instruct-2407
+
+## Understanding function calling
+
+Function calling consists of three main components:
+- **Tool definitions**: JSON schemas that describe available functions and their parameters
+- **Tool selection**: Automatic or manual selection of appropriate functions based on user queries
+- **Tool execution**: Processing function calls and handling their responses
+
+The workflow typically follows these steps:
+1. Define available tools using JSON schema
+2. Send the system and user query along with tool definitions
+3. Process the model's function selection
+4. Execute the selected functions
+5. Return results to the model for the final response
+
+## Code examples
+
+
+ Before diving into the code examples, ensure you have the necessary libraries installed:
+ ```bash
+ pip install openai
+ ```
+
+
+We will demonstrate function calling using a flight scheduling system that allows users to check available flights between European airports.
+
+### Basic function definition
+
+First, let's define our flight schedule function and its schema:
+
+```python
+from openai import OpenAI
+import json
+
+def get_flight_schedule(departure_airport: str, destination_airport: str, departure_date: str) -> dict:
+ """
+ Retrieves flight schedules between two European airports on a specific date.
+ """
+ # Mock flight schedule data
+ flights = {
+ "CDG-LHR-2024-11-01": [
+ {"flight_number": "AF123", "airline": "Air France", "departure_time": "08:00", "arrival_time": "09:00"},
+ {"flight_number": "BA456", "airline": "British Airways", "departure_time": "10:00", "arrival_time": "11:00"},
+ {"flight_number": "LH789", "airline": "Lufthansa", "departure_time": "14:00", "arrival_time": "15:00"}
+ ],
+ "AMS-MUC-2024-11-01": [
+ {"flight_number": "KL101", "airline": "KLM", "departure_time": "07:30", "arrival_time": "09:00"},
+ {"flight_number": "LH202", "airline": "Lufthansa", "departure_time": "12:00", "arrival_time": "13:30"}
+ ]
+ }
+
+ key = f"{departure_airport}-{destination_airport}-{departure_date}"
+ return flights.get(key, {"error": "No flights found for this route and date."})
+
+# Define the tool specification
+tools = [{
+ "type": "function",
+ "function": {
+ "name": "get_flight_schedule",
+ "description": "Get available flights between two European airports on a specific date",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "departure_airport": {
+ "type": "string",
+ "description": "The IATA code of the departure airport (e.g., CDG, LHR)"
+ },
+ "destination_airport": {
+ "type": "string",
+ "description": "The IATA code of the destination airport"
+ },
+ "departure_date": {
+ "type": "string",
+ "description": "The date of departure in YYYY-MM-DD format"
+ }
+ },
+ "required": ["departure_airport", "destination_airport", "departure_date"]
+ }
+ }
+}]
+```
+
+### Simple function call example
+
+Here is how to implement a basic function call:
+
+```python
+# Initialize the OpenAI client
+client = OpenAI(
+ base_url="https://api.scaleway.ai/v1",
+ api_key=""
+)
+
+# Create a simple query
+messages = [
+ {
+ "role": "system",
+ "content": "You are a helpful flight assistant."
+ },
+ {
+ "role": "user",
+ "content": "What flights are available from CDG to LHR on November 1st, 2024?"
+ }
+]
+
+# Make the API call
+response = client.chat.completions.create(
+ model="llama-3.1-70b-instruct",
+ messages=messages,
+ tools=tools,
+ tool_choice="auto"
+)
+```
+
+
+ The model automatically decides which functions to call. However, you can specify a particular function by using the `tool_choice` parameter. In the example above, you can replace `tool_choice="auto"` with `tool_choice={"type": "function", "function": {"name": "get_flight_schedule"}}` to explicitly call the desired function.
+
+
+### Multi-turn conversation handling
+
+For more complex interactions, you will need to handle multiple turns of conversation:
+
+```python
+# Process the tool call
+if response.choices[0].message.tool_calls:
+ tool_call = response.choices[0].message.tool_calls[0]
+
+ # Execute the function
+ if tool_call.function.name == "get_flight_schedule":
+ function_args = json.loads(tool_call.function.arguments)
+ function_response = get_flight_schedule(**function_args)
+
+ # Add results to the conversation
+ messages.extend([
+ {
+ "role": "assistant",
+ "content": None,
+ "tool_calls": [tool_call]
+ },
+ {
+ "role": "tool",
+ "name": tool_call.function.name,
+ "content": json.dumps(function_response),
+ "tool_call_id": tool_call.id
+ }
+ ])
+
+ # Get final response
+ final_response = client.chat.completions.create(
+ model="llama-3.1-70b-instruct",
+ messages=messages
+ )
+ print(final_response.choices[0].message.content)
+```
+
+### Parallel function calling
+
+
+ Meta models do not support parallel tool calls.
+
+
+In addition to the single function call described above, you can also call multiple functions in a single turn.
+This section shows an example of how to use parallel function calling.
+
+Define the tools:
+
+```python
+def open_floor_space(floor_number: int) -> bool:
+ """Opens up the specified floor for party space by unlocking doors and moving furniture."""
+ print(f"Floor {floor_number} is now open party space!")
+ return True
+
+def set_lobby_vibe(party_mode: bool) -> str:
+ """Switches lobby screens and lighting to party mode."""
+ status = "party mode activated!" if party_mode else "back to business mode"
+ print(f"Lobby is now in {status}")
+ return "The lobby is ready to party!"
+
+def prep_snack_station(activate: bool) -> bool:
+ """Converts the cafeteria into a snack and drink station."""
+ print(f"Snack station is {'open and stocked!' if activate else 'closed.'}")
+ return True
+```
+
+Define the specifications:
+
+```python
+tools = [
+ {
+ "type": "function",
+ "function": {
+ "name": "open_floor_space",
+ "description": "Opens up an entire floor for the party",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "floor_number": {
+ "type": "integer",
+ "description": "Which floor to open up"
+ }
+ },
+ "required": ["floor_number"]
+ }
+ }
+ },
+ {
+ "type": "function",
+ "function": {
+ "name": "set_lobby_vibe",
+ "description": "Transform lobby atmosphere into party mode",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "party_mode": {
+ "type": "boolean",
+ "description": "True for party, False for business"
+ }
+ },
+ "required": ["party_mode"]
+ }
+ }
+ },
+ {
+ "type": "function",
+ "function": {
+ "name": "prep_snack_station",
+ "description": "Set up the snack and drink station",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "activate": {
+ "type": "boolean",
+ "description": "True to open, False to close"
+ }
+ },
+ "required": ["activate"]
+ }
+ }
+ }
+]
+```
+
+Next, call the model with proper instructions:
+
+```python
+system_prompt = """
+You are an office party control assistant. When asked to transform the office into a party space, you should:
+1. Open up a floor for the party
+2. Transform the lobby into party mode
+3. Set up the snack station
+Make all these changes at once for an instant office party!
+"""
+
+messages = [
+ {"role": "system", "content": system_prompt},
+ {"role": "user", "content": "Turn this office building into a party!"}
+]
+```
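+
+Finally, send the request with the tool definitions and execute each returned tool call. The following is a minimal sketch: it reuses the `client` and the `json` import from the earlier examples, and dispatches calls through a simple name-to-function mapping.
+
+```python
+response = client.chat.completions.create(
+    model="mistral-nemo-instruct-2407",  # Meta models do not support parallel tool calls
+    messages=messages,
+    tools=tools,
+    tool_choice="auto"
+)
+
+# Map tool names to the Python functions defined above
+available_functions = {
+    "open_floor_space": open_floor_space,
+    "set_lobby_vibe": set_lobby_vibe,
+    "prep_snack_station": prep_snack_station,
+}
+
+# Execute every tool call returned by the model
+for tool_call in response.choices[0].message.tool_calls or []:
+    function = available_functions[tool_call.function.name]
+    arguments = json.loads(tool_call.function.arguments)
+    function(**arguments)
+```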
+
+## Best practices
+
+When implementing function calling, follow these guidelines for optimal results:
+
+1. **Function design**
+ - Keep function names clear and descriptive
+ - Limit the number of functions to 7 or fewer per conversation
+ - Use detailed parameter descriptions in your JSON schema
+
+2. **Parameter handling**
+ - Always specify required parameters
+ - Use appropriate data types and validation
+ - Include example values in parameter descriptions
+
+3. **Error handling**
+ - Implement robust error handling for function execution
+ - Return clear error messages that the model can interpret
+ - Handle edge cases gracefully
+
+4. **Performance optimization**
+ - Set appropriate temperature values (lower for more precise function calls)
+ - Cache frequently accessed data when possible
+ - Minimize the number of turns in multi-turn conversations
+
+
+ For production applications, always implement proper error handling and input validation. The examples above focus on the happy path for clarity.
+
+
+## Further resources
+
+For more information about function calling and advanced implementations, refer to these resources:
+
+- [OpenAI Function Calling Guide](https://platform.openai.com/docs/guides/function-calling)
+- [JSON Schema Specification](https://json-schema.org/specification)
+- [Chat Completions API Reference](/ai-data/generative-apis/api-cli/using-chat-api/)
+
+Function calling significantly extends the capabilities of language models by allowing them to interact with external tools and APIs.
+
+
+ We can't wait to see what you will build with function calls. Tell us what you are up to and share your experiments on Scaleway's [Slack community](https://slack.scaleway.com/) in the #ai channel.
+
\ No newline at end of file
diff --git a/ai-data/generative-apis/how-to/use-structured-outputs.mdx b/ai-data/generative-apis/how-to/use-structured-outputs.mdx
index f2c7b93632..5d69bb81d3 100644
--- a/ai-data/generative-apis/how-to/use-structured-outputs.mdx
+++ b/ai-data/generative-apis/how-to/use-structured-outputs.mdx
@@ -5,7 +5,7 @@ meta:
content:
h1: How to use structured outputs
paragraph: Learn how to interact with powerful text models using Scaleway's Chat Completions API service.
-tags: chat-completitions-api
+tags: chat-completions-api
dates:
validation: 2024-09-17
posted: 2024-09-17
@@ -215,7 +215,7 @@ extract = client.chat.completions.create(
"items": {"type": "string"}
}
},
- "additionalProperties": false,
+ "additionalProperties": False,
"required": ["title", "summary", "actionItems"]
}
}
diff --git a/ai-data/generative-apis/quickstart.mdx b/ai-data/generative-apis/quickstart.mdx
index 0eabad417b..ec0ec90434 100644
--- a/ai-data/generative-apis/quickstart.mdx
+++ b/ai-data/generative-apis/quickstart.mdx
@@ -32,7 +32,19 @@ Hosted in European data centers and priced competitively per million tokens used
## Start with the Generative APIs Playground
-Scaleway's Playground is in development, stay tuned!
+Scaleway provides a web playground for instruct-based models hosted on Generative APIs.
+
+### Accessing the Playground
+1. Navigate to Generative APIs under the AI section of the [Scaleway console](https://console.scaleway.com/) side menu. The list of models you can query displays.
+2. Click the name of the chat model you want to try. Alternatively, click the icon next to the chat model, then click **Try model** in the menu.
+
+The web playground displays.
+
+### Using the Playground
+1. Enter a prompt at the bottom of the page, or use one of the suggested prompts in the conversation area.
+2. Edit the hyperparameters listed in the right column, for example the default temperature, to get more or less randomness in the outputs.
+3. Switch models at the top of the page to observe the capabilities of chat models offered via Generative APIs.
+4. Click **View code** to get code snippets configured according to your settings in the playground.
## Install the OpenAI Python SDK
diff --git a/ai-data/generative-apis/reference-content/rate-limits.mdx b/ai-data/generative-apis/reference-content/rate-limits.mdx
index 78839ecbd8..3f18ff0c28 100644
--- a/ai-data/generative-apis/reference-content/rate-limits.mdx
+++ b/ai-data/generative-apis/reference-content/rate-limits.mdx
@@ -13,16 +13,31 @@ dates:
## What are the limits?
-
- This service has no rate limits while in closed beta. Limits will be set at a later stage.
-
-
-Any given model served through Scaleway Generative APIs will ultimately get limited by:
+Any model served through Scaleway Generative APIs is limited by:
- Tokens per minute
-- Queries per second
+- Requests per minute
+
+### Chat models
+
+| Model string | Requests per minute | Tokens per minute |
+|-----------------|-----------------|-----------------|
+| `llama-3.1-8b-instruct` | 300 | 100K |
+| `llama-3.1-70b-instruct` | 300 | 100K |
+| `mistral-nemo-instruct-2407`| 300 | 100K |
+| `pixtral-12b-2409`| 300 | 100K |
-We welcome feedback from early testers to set proper rates according to future use.
+### Embedding models
+
+| Model string | Requests per minute | Tokens per minute |
+|-----------------|-----------------|-----------------|
+| `sentence-t5-xxl` | 600 | 1M |
+| `bge-multilingual-gemma2` | 600 | 1M |
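+
+If you hit these limits, a common client-side pattern is to retry with exponential backoff. Below is a minimal sketch using the OpenAI Python client; the model name and retry budget are placeholder assumptions.
+
+```python
+import time
+
+from openai import OpenAI, RateLimitError
+
+client = OpenAI(
+    base_url="https://api.scaleway.ai/v1",  # Scaleway's Generative APIs service URL
+    api_key=""  # Your unique API secret key from Scaleway
+)
+
+def complete_with_backoff(messages, max_retries=5):
+    for attempt in range(max_retries):
+        try:
+            return client.chat.completions.create(
+                model="llama-3.1-8b-instruct",
+                messages=messages,
+            )
+        except RateLimitError:
+            time.sleep(2 ** attempt)  # 429: wait 1s, 2s, 4s, ... before retrying
+    raise RuntimeError("Still rate limited after retries")
+```
+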
## Why do we set rate limits?
-These limits will safeguard against abuse or misuse of Scaleway Generative APIs, helping to ensure fair access to the API with consistent performance.
\ No newline at end of file
+These limits safeguard against abuse or misuse of Scaleway Generative APIs, helping to ensure fair access to the API with consistent performance.
+
+## How can I increase the rate limits?
+
+We actively monitor usage and will improve rates based on feedback.
+If you need to increase your rate limits, please [open a support ticket](https://console.scaleway.com/support/tickets/create), providing details on the model used and your specific use case.
\ No newline at end of file
diff --git a/ai-data/generative-apis/reference-content/supported-models.mdx b/ai-data/generative-apis/reference-content/supported-models.mdx
index 9975b24969..f35a361ea7 100644
--- a/ai-data/generative-apis/reference-content/supported-models.mdx
+++ b/ai-data/generative-apis/reference-content/supported-models.mdx
@@ -21,8 +21,11 @@ Our [Chat API](/ai-data/generative-apis/how-to/query-text-models) has built-in s
| Provider | Model string | Context window | License | Model card |
|-----------------|-----------------|-----------------|-----------------|-----------------|
-| Meta | `llama-3.1-8b-instruct` | 128k | [Llama 3.1 Community License Agreement](https://llama.meta.com/llama3_1/license/) | [HF](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) |
+| Meta | `llama-3.1-8b-instruct` | 128k | [Llama 3.1 Community License Agreement](https://llama.meta.com/llama3_1/license/) | [HF](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) |
+| Meta | `llama-3.1-70b-instruct` | 128k | [Llama 3.1 Community License Agreement](https://llama.meta.com/llama3_1/license/) | [HF](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) |
| Mistral | `mistral-nemo-instruct-2407` | 128k | [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) | [HF](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) |
+| Mistral | `pixtral-12b-2409` | 128k | [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) | [HF](https://huggingface.co/mistralai/Pixtral-12B-2409) |
+
If you are unsure which chat model to use, we currently recommend Llama 3.1 8B Instruct (`llama-3.1-8b-instruct`) to get started.
@@ -39,6 +42,7 @@ Our [Embeddings API](/ai-data/generative-apis/how-to/query-embedding-models) pro
| Provider | Model string | Model size | Embedding dimension | Context window | License | Model card |
|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| SBERT | `sentence-t5-xxl` | 5B | 768 | 512 | [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) | [HF](https://huggingface.co/sentence-transformers/sentence-t5-xxl) |
+| BAAI | `bge-multilingual-gemma2` | 9B | 3584 | 4096 | [Gemma](https://ai.google.dev/gemma/terms) | [HF](https://huggingface.co/BAAI/bge-multilingual-gemma2) |
## Request a model
@@ -46,4 +50,4 @@ Our [Embeddings API](/ai-data/generative-apis/how-to/query-embedding-models) pro
## Deprecated models
-This section will list models retired and no longer accessible for use. All models are currently in `Active` status.
\ No newline at end of file
+This section will list models retired and no longer accessible for use. All models are currently in `Active` status.
diff --git a/ai-data/managed-inference/concepts.mdx b/ai-data/managed-inference/concepts.mdx
index 4745322f9c..5b84c499c3 100644
--- a/ai-data/managed-inference/concepts.mdx
+++ b/ai-data/managed-inference/concepts.mdx
@@ -42,6 +42,10 @@ Fine-tuning involves further training a pre-trained language model on domain-spe
Few-shot prompting uses the power of language models to generate responses with minimal input, relying on just a handful of examples or prompts.
It demonstrates the model's ability to generalize from limited training data to produce coherent and contextually relevant outputs.
+## Function calling
+
+Function calling allows a large language model (LLM) to interact with external tools or APIs, executing specific tasks based on user requests. The LLM identifies the appropriate function, extracts the required parameters, and returns the results as structured data, typically in JSON format.
+
## Hallucinations
Hallucinations in LLMs refer to instances where generative AI models generate responses that, while grammatically coherent, contain inaccuracies or nonsensical information. These inaccuracies are termed "hallucinations" because the models create false or misleading content. Hallucinations can occur because of constraints in the training data, biases embedded within the models, or the complex nature of language itself.
diff --git a/ai-data/managed-inference/how-to/managed-inference-with-private-network.mdx b/ai-data/managed-inference/how-to/managed-inference-with-private-network.mdx
index 8979e9e81d..ca4f2c65da 100644
--- a/ai-data/managed-inference/how-to/managed-inference-with-private-network.mdx
+++ b/ai-data/managed-inference/how-to/managed-inference-with-private-network.mdx
@@ -91,7 +91,7 @@ Using a Private Network for communications between your Instances hosting your a
import requests
PAYLOAD = {
- "model": "", # EXAMPLE= meta/llama-3-8b-instruct:bf16
+ "model": "", # EXAMPLE= meta/llama-3.1-8b-instruct:fp8
"messages": [
{"role": "system",
"content": "You are a helpful, respectful and honest assistant."},
diff --git a/ai-data/managed-inference/reference-content/function-calling-support.mdx b/ai-data/managed-inference/reference-content/function-calling-support.mdx
new file mode 100644
index 0000000000..19351f3ddb
--- /dev/null
+++ b/ai-data/managed-inference/reference-content/function-calling-support.mdx
@@ -0,0 +1,49 @@
+---
+meta:
+ title: Support for function calling in Scaleway Managed Inference
+ description: Function calling allows models to connect to external tools.
+content:
+ h1: Support for function calling in Scaleway Managed Inference
+ paragraph: Function calling allows models to connect to external tools.
+tags:
+categories:
+ - ai-data
+---
+
+## What is function calling?
+
+Function calling allows a large language model (LLM) to interact with external tools or APIs, executing specific tasks based on user requests. The LLM identifies the appropriate function, extracts the required parameters, and returns the results as structured data, typically in JSON format. While errors can occur, custom parsers or tools like LlamaIndex and LangChain can help ensure valid results.
+
+## How to implement function calling in Scaleway Managed Inference?
+
+[This tutorial](/tutorials/building-ai-application-function-calling/) will guide you through the steps of creating a simple flight schedule assistant that can understand natural language queries about flights and return structured information.
+
+## What are models with function calling capabilities?
+
+The following models in Scaleway's Managed Inference library can call tools as per the OpenAI method:
+
+* meta/llama-3.1-8b-instruct
+* meta/llama-3.1-70b-instruct
+* mistral/mistral-7b-instruct-v0.3
+* mistral/mistral-nemo-instruct-2407
+
+## Understanding function calling
+
+Function calling consists of three main components:
+- **Tool definitions**: JSON schemas that describe available functions and their parameters
+- **Tool selection**: Automatic or manual selection of appropriate functions based on user queries
+- **Tool execution**: Processing function calls and handling their responses
+
+The workflow typically follows these steps:
+1. Define available tools using JSON schema
+2. Send the system and user query along with tool definitions
+3. Process the model's function selection
+4. Execute the selected functions
+5. Return results to the model for the final response
+
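+For illustration, here is a minimal tool definition sketch in the OpenAI format (the function name and parameters are hypothetical):
+
+```python
+tools = [{
+    "type": "function",
+    "function": {
+        "name": "get_weather",  # hypothetical helper
+        "description": "Get the current weather for a city",
+        "parameters": {
+            "type": "object",
+            "properties": {
+                "city": {"type": "string", "description": "City name, e.g. Paris"}
+            },
+            "required": ["city"]
+        }
+    }
+}]
+```
+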
+## Further resources
+
+For more information about function calling and advanced implementations, refer to these resources:
+
+- [OpenAI Function Calling Guide](https://platform.openai.com/docs/guides/function-calling)
+- [JSON Schema Specification](https://json-schema.org/specification)
diff --git a/ai-data/managed-inference/reference-content/llama-3-70b-instruct.mdx b/ai-data/managed-inference/reference-content/llama-3-70b-instruct.mdx
index 909171c226..543a8e0158 100644
--- a/ai-data/managed-inference/reference-content/llama-3-70b-instruct.mdx
+++ b/ai-data/managed-inference/reference-content/llama-3-70b-instruct.mdx
@@ -17,7 +17,6 @@ categories:
| Attribute | Details |
|-----------------|------------------------------------|
| Provider | [Meta](https://llama.meta.com/llama3/) |
-| Model Name | `llama-3-70b-instruct` |
| Compatible Instances | H100 (FP8) |
| Context size | 8192 tokens |
@@ -62,7 +61,7 @@ curl -s \
-H "Content-Type: application/json" \
--request POST \
--url "https://.ifr.fr-par.scaleway.com/v1/chat/completions" \
---data '{"model":"llama-3-70b-instruct", "messages":[{"role": "user","content": "Sing me a song about Xavier Niel"}], "max_tokens": 500, "top_p": 1, "temperature": 0.7, "stream": false}'
+--data '{"model":"meta/llama-3-70b-instruct:fp8", "messages":[{"role": "user","content": "Sing me a song about Xavier Niel"}], "max_tokens": 500, "top_p": 1, "temperature": 0.7, "stream": false}'
```
Make sure to replace `` and `` with your actual [IAM API key](/identity-and-access-management/iam/how-to/create-api-keys/) and the Deployment UUID you are targeting.
diff --git a/ai-data/managed-inference/reference-content/llama-3-8b-instruct.mdx b/ai-data/managed-inference/reference-content/llama-3-8b-instruct.mdx
index cd23ecf682..6970e05524 100644
--- a/ai-data/managed-inference/reference-content/llama-3-8b-instruct.mdx
+++ b/ai-data/managed-inference/reference-content/llama-3-8b-instruct.mdx
@@ -17,7 +17,6 @@ categories:
| Attribute | Details |
|-----------------|------------------------------------|
| Provider | [Meta](https://llama.meta.com/llama3/) |
-| Model Name | `llama-3-8b-instruct` |
| Compatible Instances | L4, H100 (FP8, BF16) |
| Context size | 8192 tokens |
@@ -66,7 +65,7 @@ curl -s \
-H "Content-Type: application/json" \
--request POST \
--url "https://.ifr.fr-par.scaleway.com/v1/chat/completions" \
---data '{"model":"llama-3-8b-instruct", "messages":[{"role": "user","content": "There is a llama in my garden, what should I do?"}], "max_tokens": 500, "top_p": 1, "temperature": 0.7, "stream": false}'
+--data '{"model":"meta/llama-3-8b-instruct:fp8", "messages":[{"role": "user","content": "There is a llama in my garden, what should I do?"}], "max_tokens": 500, "top_p": 1, "temperature": 0.7, "stream": false}'
```
Make sure to replace `` and `` with your actual [IAM API key](/identity-and-access-management/iam/how-to/create-api-keys/) and the Deployment UUID you are targeting.
diff --git a/ai-data/managed-inference/reference-content/llama-3.1-70b-instruct.mdx b/ai-data/managed-inference/reference-content/llama-3.1-70b-instruct.mdx
index 26266b795c..eb6695e46b 100644
--- a/ai-data/managed-inference/reference-content/llama-3.1-70b-instruct.mdx
+++ b/ai-data/managed-inference/reference-content/llama-3.1-70b-instruct.mdx
@@ -17,8 +17,7 @@ categories:
| Attribute | Details |
|-----------------|------------------------------------|
| Provider | [Meta](https://llama.meta.com/llama3/) |
-| License | [Llama 3.1 community](https://llama.meta.com/llama3_1/license/) |
-| Model Name | `llama-3.1-70b-instruct` |
+| License | [Llama 3.1 community](https://llama.meta.com/llama3_1/license/) |
| Compatible Instances | H100 (FP8), H100-2 (FP8, BF16) |
| Context Length | up to 128k tokens |
@@ -61,7 +60,7 @@ curl -s \
-H "Content-Type: application/json" \
--request POST \
--url "https://.ifr.fr-par.scaleway.com/v1/chat/completions" \
---data '{"model":"llama-3.1-70b-instruct", "messages":[{"role": "user","content": "There is a llama in my garden, what should I do?"}], "max_tokens": 500, "temperature": 0.7, "stream": false}'
+--data '{"model":"meta/llama-3.1-70b-instruct:fp8", "messages":[{"role": "user","content": "There is a llama in my garden, what should I do?"}], "max_tokens": 500, "temperature": 0.7, "stream": false}'
```
Make sure to replace `` and `` with your actual [IAM API key](/identity-and-access-management/iam/how-to/create-api-keys/) and the Deployment UUID you are targeting.
diff --git a/ai-data/managed-inference/reference-content/llama-3.1-8b-instruct.mdx b/ai-data/managed-inference/reference-content/llama-3.1-8b-instruct.mdx
index c7f943c185..1e45a0dcdb 100644
--- a/ai-data/managed-inference/reference-content/llama-3.1-8b-instruct.mdx
+++ b/ai-data/managed-inference/reference-content/llama-3.1-8b-instruct.mdx
@@ -18,7 +18,6 @@ categories:
|-----------------|------------------------------------|
| Provider | [Meta](https://llama.meta.com/llama3/) |
| License | [Llama 3.1 community](https://llama.meta.com/llama3_1/license/) |
-| Model Name | `llama-3.1-8b-instruct` |
| Compatible Instances | L4, H100, H100-2 (FP8, BF16) |
| Context Length | up to 128k tokens |
@@ -62,7 +61,7 @@ curl -s \
-H "Content-Type: application/json" \
--request POST \
--url "https://.ifr.fr-par.scaleway.com/v1/chat/completions" \
---data '{"model":"llama-3.1-8b-instruct", "messages":[{"role": "user","content": "There is a llama in my garden, what should I do?"}], "max_tokens": 500, "temperature": 0.7, "stream": false}'
+--data '{"model":"meta/llama-3.1-8b-instruct:fp8", "messages":[{"role": "user","content": "There is a llama in my garden, what should I do?"}], "max_tokens": 500, "temperature": 0.7, "stream": false}'
```
Make sure to replace `` and `` with your actual [IAM API key](/identity-and-access-management/iam/how-to/create-api-keys/) and the Deployment UUID you are targeting.
diff --git a/ai-data/managed-inference/reference-content/mistral-7b-instruct-v0.3.mdx b/ai-data/managed-inference/reference-content/mistral-7b-instruct-v0.3.mdx
index 3b448ef6c6..f4ff7ba4a6 100644
--- a/ai-data/managed-inference/reference-content/mistral-7b-instruct-v0.3.mdx
+++ b/ai-data/managed-inference/reference-content/mistral-7b-instruct-v0.3.mdx
@@ -17,14 +17,13 @@ categories:
| Attribute | Details |
|-----------------|------------------------------------|
| Provider | [Mistral](https://mistral.ai/technology/#models) |
-| Model Name | `mistral-7b-instruct-v0.3` |
| Compatible Instances | L4 (BF16) |
| Context size | 32K tokens |
## Model name
```bash
-mistral-7b-instruct-v0.3:bf16
+mistral/mistral-7b-instruct-v0.3:bf16
```
## Compatible Instances
@@ -55,7 +54,7 @@ curl -s \
-H "Content-Type: application/json" \
--request POST \
--url "https://.ifr.fr-par.scaleway.com/v1/chat/completions" \
---data '{"model":"mistral-7b-instruct-v0.3", "messages":[{"role": "user","content": "Explain Public Cloud in a nutshell."}], "top_p": 1, "temperature": 0.7, "stream": false}'
+--data '{"model":"mistral/mistral-7b-instruct-v0.3:bf16", "messages":[{"role": "user","content": "Explain Public Cloud in a nutshell."}], "top_p": 1, "temperature": 0.7, "stream": false}'
```
Make sure to replace `` and `` with your actual [IAM API key](/identity-and-access-management/iam/how-to/create-api-keys/) and the Deployment UUID you are targeting.
diff --git a/ai-data/managed-inference/reference-content/mistral-nemo-instruct-2407.mdx b/ai-data/managed-inference/reference-content/mistral-nemo-instruct-2407.mdx
index 7662863fb3..83c5472988 100644
--- a/ai-data/managed-inference/reference-content/mistral-nemo-instruct-2407.mdx
+++ b/ai-data/managed-inference/reference-content/mistral-nemo-instruct-2407.mdx
@@ -17,14 +17,13 @@ categories:
| Attribute | Details |
|-----------------|------------------------------------|
| Provider | [Mistral](https://mistral.ai/technology/#models) |
-| Model Name | `mistral-nemo-instruct-2407` |
| Compatible Instances | H100 (FP8) |
| Context size | 128K tokens |
## Model name
```bash
-mistral-nemo-instruct-2407:fp8
+mistral/mistral-nemo-instruct-2407:fp8
```
## Compatible Instances
@@ -61,7 +60,7 @@ curl -s \
-H "Content-Type: application/json" \
--request POST \
--url "https://.ifr.fr-par.scaleway.com/v1/chat/completions" \
---data '{"model":"mistral-nemo-instruct-2407", "messages":[{"role": "user","content": "Sing me a song about Xavier Niel"}], "top_p": 1, "temperature": 0.35, "stream": false}'
+--data '{"model":"mistral/mistral-nemo-instruct-2407:fp8", "messages":[{"role": "user","content": "Sing me a song about Xavier Niel"}], "top_p": 1, "temperature": 0.35, "stream": false}'
```
Make sure to replace `` and `` with your actual [IAM API key](/identity-and-access-management/iam/how-to/create-api-keys/) and the Deployment UUID you are targeting.
diff --git a/ai-data/managed-inference/reference-content/mixtral-8x7b-instruct-v0.1.mdx b/ai-data/managed-inference/reference-content/mixtral-8x7b-instruct-v0.1.mdx
index f48a258257..4f5fc07f5f 100644
--- a/ai-data/managed-inference/reference-content/mixtral-8x7b-instruct-v0.1.mdx
+++ b/ai-data/managed-inference/reference-content/mixtral-8x7b-instruct-v0.1.mdx
@@ -17,7 +17,6 @@ categories:
| Attribute | Details |
|-----------------|------------------------------------|
| Provider | [Mistral](https://mistral.ai/technology/#models) |
-| Model Name | `mixtral-8x7b-instruct-v0.1` |
| Compatible Instances | H100 (FP8) - H100-2 (FP16) |
| Context size | 32k tokens |
@@ -57,7 +56,7 @@ curl -s \
-H "Content-Type: application/json" \
--request POST \
--url "https://.ifr.fr-par.scaleway.com/v1/chat/completions" \
---data '{"model":"mixtral-8x7b-instruct-v0.1", "messages":[{"role": "user","content": "Sing me a song about Scaleway"}], "max_tokens": 200, "top_p": 1, "temperature": 1, "stream": false}'
+--data '{"model":"mistral/mixtral-8x7b-instruct-v0.1:fp8", "messages":[{"role": "user","content": "Sing me a song about Scaleway"}], "max_tokens": 200, "top_p": 1, "temperature": 1, "stream": false}'
```
Make sure to replace `` and `` with your actual [IAM API key](/identity-and-access-management/iam/how-to/create-api-keys/) and the Deployment UUID you are targeting.
diff --git a/ai-data/managed-inference/reference-content/openai-compatibility.mdx b/ai-data/managed-inference/reference-content/openai-compatibility.mdx
index cadd000eb4..ff98d0a9de 100644
--- a/ai-data/managed-inference/reference-content/openai-compatibility.mdx
+++ b/ai-data/managed-inference/reference-content/openai-compatibility.mdx
@@ -48,7 +48,7 @@ chat_completion = client.chat.completions.create(
"content": "Sing me a song about Scaleway"
}
],
- model='' #e.g 'llama-3-8b-instruct'
+ model='' #e.g 'meta/llama-3.1-8b-instruct:fp8'
)
print(chat_completion.choices[0].message.content)
@@ -71,6 +71,8 @@ print(chat_completion.choices[0].message.content)
- `stop`
- `seed`
- `stream`
+- `tools`
+- `tool_choice`
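+
+As an illustration, here is a minimal sketch passing `tools` to a deployment, reusing the `client` from the example above (the tool definition itself is hypothetical):
+
+```python
+tools = [{
+    "type": "function",
+    "function": {
+        "name": "get_time",  # hypothetical helper
+        "description": "Get the current time for a timezone",
+        "parameters": {
+            "type": "object",
+            "properties": {"timezone": {"type": "string"}},
+            "required": ["timezone"],
+        },
+    },
+}]
+
+chat_completion = client.chat.completions.create(
+    messages=[{"role": "user", "content": "What time is it in Paris?"}],
+    model="",  # e.g. 'meta/llama-3.1-8b-instruct:fp8'
+    tools=tools,
+    tool_choice="auto",
+)
+```
+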
### Unsupported parameters
@@ -79,8 +81,6 @@ Currently, the following options are not supported:
- `frequency_penalty`
- `n`
- `top_logprobs`
-- `tools`
-- `tool_choice`
- `logit_bias`
- `user`
diff --git a/ai-data/managed-inference/reference-content/pixtral-12b-2409.mdx b/ai-data/managed-inference/reference-content/pixtral-12b-2409.mdx
index fd7c14bda1..c8193c38c4 100644
--- a/ai-data/managed-inference/reference-content/pixtral-12b-2409.mdx
+++ b/ai-data/managed-inference/reference-content/pixtral-12b-2409.mdx
@@ -17,7 +17,6 @@ categories:
| Attribute | Details |
|-----------------|------------------------------------|
| Provider | [Mistral](https://mistral.ai/technology/#models) |
-| Model Name | `pixtral-12b-2409` |
| Compatible Instances | H100, H100-2 (bf16) |
| Context size | 128k tokens |
diff --git a/ai-data/managed-inference/reference-content/sentence-t5-xxl.mdx b/ai-data/managed-inference/reference-content/sentence-t5-xxl.mdx
index 79015fba5e..c9aefbb111 100644
--- a/ai-data/managed-inference/reference-content/sentence-t5-xxl.mdx
+++ b/ai-data/managed-inference/reference-content/sentence-t5-xxl.mdx
@@ -15,11 +15,10 @@ categories:
| Attribute | Details |
|-----------------|------------------------------------|
| Provider | [sentence-transformers](https://www.sbert.net/) |
-| Model Name | `sentence-t5-xxl` |
| Compatible Instances | L4 (FP32) |
| Context size | 512 tokens |
-## Model names
+## Model name
```bash
sentence-transformers/sentence-t5-xxl:fp32
diff --git a/ai-data/managed-inference/reference-content/wizardlm-70b-v1.0.mdx b/ai-data/managed-inference/reference-content/wizardlm-70b-v1.0.mdx
index d9957b5e48..86f087c4de 100644
--- a/ai-data/managed-inference/reference-content/wizardlm-70b-v1.0.mdx
+++ b/ai-data/managed-inference/reference-content/wizardlm-70b-v1.0.mdx
@@ -16,8 +16,7 @@ categories:
| Attribute | Details |
|-----------------|------------------------------------|
-| Provider | [WizardLM](https://wizardlm.github.io/) |
-| Model Name | `wizardlm-70B-V1.0` |
+| Provider | [WizardLM](https://wizardlm.github.io/WizardLM2/) |
| Compatible Instances | H100 (FP8) - H100-2 (FP16) |
| Context size | 4,096 tokens |
@@ -55,7 +54,7 @@ curl -s \
-H "Content-Type: application/json" \
--request POST \
--url "https://.ifr.fr-par.scaleway.com/v1/chat/completions" \
---data '{"model":"wizardlm-70B-V1.0", "messages":[{"role": "user","content": "Say hello to Scaleway's Inference"}], "max_tokens": 200, "top_p": 1, "temperature": 1, "stream": false}'
+--data '{"model":"wizardlm/wizardlm-70b-v1.0:fp8", "messages":[{"role": "user","content": "Say hello to Scaleway's Inference"}], "max_tokens": 200, "top_p": 1, "temperature": 1, "stream": false}'
```
Make sure to replace `` and `` with your actual [IAM API key](/identity-and-access-management/iam/how-to/create-api-keys/) and the Deployment UUID you are targeting.
diff --git a/bare-metal/apple-silicon/troubleshooting/cant-create-apple-account.mdx b/bare-metal/apple-silicon/troubleshooting/cant-create-apple-account.mdx
new file mode 100644
index 0000000000..a303f3369d
--- /dev/null
+++ b/bare-metal/apple-silicon/troubleshooting/cant-create-apple-account.mdx
@@ -0,0 +1,50 @@
+---
+meta:
+ title: Troubleshooting account creation for hosted Mac minis
+ description: This page suggests solutions for when you cannot create an Apple Account directly from your hosted Mac mini
+content:
+ h1: Troubleshooting account creation for hosted Mac minis
+ paragraph: This page suggests solutions for when you cannot create an Apple Account directly from your hosted Mac mini
+tags: apple-id apple account creation issues
+dates:
+ validation: 2024-10-24
+ posted: 2024-10-24
+categories:
+ - bare-metal
+---
+
+An Apple Account is required for accessing Apple services such as the App Store, iCloud, iMessage, FaceTime, and more.
+It serves as a unique account used to authenticate your identity and connect you to the Apple ecosystem.
+
+However, you might encounter issues creating an Apple Account directly from your hosted Mac mini, especially if too many Apple Accounts have already been created on it.
+
+Apple has implemented [a limit on the number of Apple Accounts](https://support.apple.com/en-us/101661) that can be created from a single device. If you are unable to create an Apple Account on your Mac mini due to this restriction, you can still create one through the Apple website and then sign in on your machine.
+
+### Creating an Apple Account via the Apple website
+
+If you are unable to create an Apple Account on your hosted Mac mini, follow these steps to create one through the Apple website:
+
+1. Open your web browser and navigate to the Apple Account creation page: [https://account.apple.com/account](https://account.apple.com/account).
+2. Click **Create Your Apple Account**. You will be redirected to a page where you can start the registration process.
+3. Fill in your personal information:
+ - First and Last Name: Enter your full name.
+ - Country/Region: Select your country or region from the dropdown menu.
+ - Birthdate: Enter your date of birth.
+ - Email Address: Enter your existing email address. This will be your new Apple Account.
+ - Password: Create a strong and secure password for your account.
+ - Phone number: Enter your phone number. It will be used for two-factor authentication and account recovery.
+ Verify your information and click **Continue** to proceed.
+ Apple will send a verification email to the address you provided.
+4. Open the email and follow the instructions to verify your account.
+5. After verification, agree to Apple's terms and conditions to complete the process. Tick the checkbox and click the corresponding button to agree.
+6. Return to your hosted Mac mini and sign in using your new Apple Account credentials.
+
+
+ The information provided above is for reference only. For detailed instructions or if you encounter any issues while creating your Apple Account, refer to [Apple's official documentation](https://support.apple.com/en-us/108647) on how to create a new Apple Account.
+
+
+### Further troubleshooting
+
+- If you encounter errors while creating an Apple Account on your hosted Mac mini (e.g., "Too many Apple Accounts created on this device"), use the method described above to create your Apple Account through the website.
+- If the problem persists after creating your Apple Account on the website, try signing in from a different device or contact [Apple Support](https://support.apple.com/) for further assistance.
+- For more details, you can visit [Apple's Support Page](https://support.apple.com/en-us/108647) on Apple Accounts.
diff --git a/bare-metal/dedibox/how-to/configure-failover-ip.mdx b/bare-metal/dedibox/how-to/configure-failover-ip.mdx
index 0891d36a0e..bf7941f960 100644
--- a/bare-metal/dedibox/how-to/configure-failover-ip.mdx
+++ b/bare-metal/dedibox/how-to/configure-failover-ip.mdx
@@ -1,13 +1,13 @@
---
meta:
- title: How to configure a failover IP
- description: This page explains configure a failover IP
+ title: How to configure a failover IP on a Scaleway Dedibox
+ description: This page explains how to configure a failover IP on a Scaleway Dedibox
content:
- h1: How to configure a failover IP
- paragraph: This page explains configure a failover IP
+ h1: How to configure a failover IP on a Scaleway Dedibox
+ paragraph: This page explains how to configure a failover IP on a Scaleway Dedibox
tags: dedibox failover ip failover-ip failover-ip
dates:
- validation: 2024-04-08
+ validation: 2024-10-21
posted: 2022-04-13
---
diff --git a/bare-metal/dedibox/how-to/install-dedibox.mdx b/bare-metal/dedibox/how-to/install-dedibox.mdx
index 3b4b7744e6..2b617df5e7 100644
--- a/bare-metal/dedibox/how-to/install-dedibox.mdx
+++ b/bare-metal/dedibox/how-to/install-dedibox.mdx
@@ -1,13 +1,13 @@
---
meta:
title: How to install a Dedibox
- description: This page explains how to install a Dedibox
+ description: This page explains how to install a Scaleway Dedibox
content:
h1: How to install a Dedibox
- paragraph: This page explains how to install a Dedibox
+ paragraph: This page explains how to install a Scaleway Dedibox
tags: dedibox install
dates:
- validation: 2024-04-08
+ validation: 2024-10-21
posted: 2022-01-31
---
diff --git a/bare-metal/elastic-metal/how-to/configure-disk-partitions.mdx b/bare-metal/elastic-metal/how-to/configure-disk-partitions.mdx
index 3838afbdf5..25788b95a0 100644
--- a/bare-metal/elastic-metal/how-to/configure-disk-partitions.mdx
+++ b/bare-metal/elastic-metal/how-to/configure-disk-partitions.mdx
@@ -163,7 +163,7 @@ Below is an example of how to define a partitioning schema with RAID and NVMe di
- Disks:
- Each disk is specified with its device path (e.g., `/dev/nvme0n1` or `/dev/nvme1n1`).
- - Partitions are defined with labels like `swap`, `boot`, `root`, `data`, and an optional `uefi` partition for systems using UEFI.
+ - Partitions are defined with labels. The default value is `unknown_partition_label`, and possible values are: `uefi`, `legacy`, `root`, `boot`, `swap`, `data`, `home`, `raid`. Refer to the [API documentation](https://www.scaleway.com/en/developers/api/elastic-metal/#path-servers-install-an-elastic-metal-server) for full details.
- Each partition has a `number` and `size` in bytes.
- RAID (Optional):
@@ -224,4 +224,4 @@ If you prefer a simpler configuration without RAID or ZFS, you can remove the `r
"lvm": null
}
}
-```
\ No newline at end of file
+```
diff --git a/changelog/container-registry/january2022/2022-01-02-registry-changed-backend.mdx b/changelog/container-registry/january2022/2022-01-02-registry-changed-backend.mdx
index 7c36054b22..5d9fefba94 100644
--- a/changelog/container-registry/january2022/2022-01-02-registry-changed-backend.mdx
+++ b/changelog/container-registry/january2022/2022-01-02-registry-changed-backend.mdx
@@ -1,5 +1,5 @@
---
-title: Migration to the new S3 backend (HIVE) for all regions
+title: Migration to the new Object Storage backend (HIVE) for all regions
status: changed
author:
fullname: 'Join the #container-registry channel on Slack.'
@@ -9,4 +9,4 @@ category: containers
product: container-registry
---
-All regions were migrated to the new S3 backend (HIVE) and are now using its highly redundant #MultiAZ infrastructure in `FR-PAR`. As a result, almost all recent issues regarding the registry are resolved.
\ No newline at end of file
+All regions were migrated to the new Object Storage backend (HIVE) and are now using its highly redundant #MultiAZ infrastructure in `FR-PAR`. As a result, almost all recent issues regarding the registry are resolved.
\ No newline at end of file
diff --git a/changelog/october2024/2024-10-17-transactional-email-changed-increase-in-minimum-quotas.mdx b/changelog/october2024/2024-10-17-transactional-email-changed-increase-in-minimum-quotas.mdx
new file mode 100644
index 0000000000..5454431049
--- /dev/null
+++ b/changelog/october2024/2024-10-17-transactional-email-changed-increase-in-minimum-quotas.mdx
@@ -0,0 +1,14 @@
+---
+title: Increase in default email sending quotas
+status: changed
+author:
+ fullname: 'Join the #transactional-email channel on Slack.'
+ url: 'https://slack.scaleway.com'
+date: 2024-10-17
+category: managed-services
+product: transactional-email
+---
+
+We have updated the default email sending quotas to better align with your needs. The new limits are available on the [capabilities and limits](/managed-services/transactional-email/reference-content/tem-capabilities-and-limits/) page. These changes offer greater flexibility and optimization capabilities for managing your email volumes.
+
+
diff --git a/changelog/october2024/2024-10-23-elastic-metal-added-disk-partitioning-configuration-now.mdx b/changelog/october2024/2024-10-23-elastic-metal-added-disk-partitioning-configuration-now.mdx
new file mode 100644
index 0000000000..0147009e42
--- /dev/null
+++ b/changelog/october2024/2024-10-23-elastic-metal-added-disk-partitioning-configuration-now.mdx
@@ -0,0 +1,13 @@
+---
+title: Disk partitioning configuration now available!
+status: added
+author:
+ fullname: 'Join the #elastic-metal channel on Slack.'
+ url: 'https://slack.scaleway.com'
+date: 2024-10-23
+category: bare-metal
+product: elastic-metal
+---
+
+You can now partition the disk capacity of your server as needed during server setup by directly editing the JSON file in the Scaleway console. Eligible servers are marked with the green 'NEW' badge.
+Refer to the [dedicated page](/bare-metal/elastic-metal/how-to/configure-disk-partitions/) for more details on custom partitioning.
diff --git a/changelog/october2024/2024-10-25-managed-inference-added-support-for-function-calling.mdx b/changelog/october2024/2024-10-25-managed-inference-added-support-for-function-calling.mdx
new file mode 100644
index 0000000000..7de0a9ef36
--- /dev/null
+++ b/changelog/october2024/2024-10-25-managed-inference-added-support-for-function-calling.mdx
@@ -0,0 +1,17 @@
+---
+title: Support for function calling
+status: added
+author:
+ fullname: 'Join the #ai channel on Slack.'
+ url: 'https://slack.scaleway.com'
+date: 2024-10-25
+category: ai-data
+product: managed-inference
+---
+
+Function calling allows a large language model (LLM) to interact with external tools or APIs.
+
+Parameters `tools` and `tool_choice` of our OpenAI-compatible chat API are now accepted for models with this capability.
+
+Read [our dedicated documentation](https://www.scaleway.com/en/docs/ai-data/managed-inference/reference-content/function-calling-support/) and [tutorial to get started](https://www.scaleway.com/en/docs/tutorials/building-ai-application-function-calling)!
+
diff --git a/components/docs-editor.mdx b/components/docs-editor.mdx
index 26da9e23e1..626c7df263 100644
--- a/components/docs-editor.mdx
+++ b/components/docs-editor.mdx
@@ -259,7 +259,7 @@ At top of `.mdx` file, you MUST add data in frontmatter:
```
---
-title: Migration to the new S3 backend (HIVE) for all regions
+title: Migration to the new Object Storage backend (HIVE) for all regions
status: changed
author:
fullname: 'Join the #container-registry channel on Slack.'
diff --git a/compute/instances/api-cli/snapshot-import-export-feature.mdx b/compute/instances/api-cli/snapshot-import-export-feature.mdx
index 9dd5a91819..c7d0305e8c 100644
--- a/compute/instances/api-cli/snapshot-import-export-feature.mdx
+++ b/compute/instances/api-cli/snapshot-import-export-feature.mdx
@@ -35,7 +35,7 @@ More information on the QCOW2 file format, and how to use it can be found in the
1. Create a Scaleway Object Storage bucket.
- You need an S3 bucket to export your QCOW2 file into. Any bucket that belongs to the same project as the snapshot can be used. However, if you do not have one already, you can [create it](/storage/object/how-to/create-a-bucket/) in the console.
+ You need an Object Storage bucket to export your QCOW2 file into. Any bucket that belongs to the same project as the snapshot can be used. However, if you do not have one already, you can [create it](/storage/object/how-to/create-a-bucket/) in the console.
2. Create a snapshot from a volume.
To use this functionality, you must [create a snapshot](/compute/instances/how-to/create-a-snapshot/#how-to-create-a-snapshot) from the volume you want to export.
@@ -53,7 +53,7 @@ More information on the QCOW2 file format, and how to use it can be found in the
- The secret key of your API key pair (``)
- The snapshot ID (``)
- The name of the Object Storage bucket to store the snapshot (which has to exist in the same Scaleway region as the snapshot)
- - A key (can be any acceptable key/object name for Scaleway S3 (suffixing qcow2 images with `.qcow2`))
+ - A key (can be any acceptable key/object name for Scaleway Object Storage (suffixing qcow2 images with `.qcow2`))
The API returns an output as in the following example:
```json
diff --git a/compute/instances/how-to/create-an-instance.mdx b/compute/instances/how-to/create-an-instance.mdx
index 876e07de7e..0694ea47ac 100644
--- a/compute/instances/how-to/create-an-instance.mdx
+++ b/compute/instances/how-to/create-an-instance.mdx
@@ -31,29 +31,28 @@ Select a tab below for instructions on how to create an Instance via either our
1. Click **Instances** in the **Compute** section of the side menu. The [Instance dashboard](https://console.scaleway.com/instance/servers) displays.
2. Click **Create Instance**. The [Instance creation page](https://console.scaleway.com/instance/servers) displays.
3. Complete the following steps:
- - Choose an **Availability Zone**, which represents the geographical region where your Instance will be deployed.
- - Choose an **Instance type**.
+ - **Choose an Availability Zone**, which represents the geographical region where your Instance will be deployed.
+ - **Choose an Instance type**.
Instance offers vary in pricing, processing power, memory, storage, and bandwidth. [Discover the best Instance type for your needs](/compute/instances/reference-content/choosing-instance-type/).
- - Choose an **Image** to run on your Instance.
+ - **Choose an image** to run on your Instance.
This can be an operating system, an InstantApp, or a custom image. [Check all available Linux distributions and InstantApps](/compute/instances/reference-content/images-and-instantapps/).
- - Add **Volumes**, which are storage spaces used by your Instances.
- - For **GP1 Instances** you can leave the default settings of maximum local storage, or choose how much [local](/compute/instances/concepts/#local-volumes) and/or [block](/compute/instances/concepts/#block-volumes) storage you want. Your **system volume** is the volume on which your Instance will boot. The system volume can be either a local or a block volume.
- - **PLAY2**, **PRO2**, and **Enterprise** Instances boot directly [on block volumes](/compute/instances/concepts/#boot-on-block). You can add several block volumes and define how much storage you want for each.
+ - **Name your Instance**, or leave the randomly-generated name in place. Optionally, you can add [tags](/compute/instances/concepts/#tags) to help you organize your Instance.
+ - **Add volumes**, which are storage spaces used by your Instances. A block volume with a default name and 5,000 IOPS is automatically provided for your system volume. You can customize this volume and attach up to 16 local and/or block volumes as needed.
- - Ensure that a volume with an OS image has a minimum capacity of 10 GB. For a GPU OS, the recommended size is 125 GB.
+ - Ensure that the volume containing your OS image has a minimum size of 10 GB. For a GPU OS, the recommended size is 125 GB.
- When multiple Block Storage volumes are linked to your Instance, the primary volume will host the OS and is essential for booting the Instance. Once the Instance is created, you can [modify your boot volume](/compute/instances/how-to/use-boot-modes/#how-to-change-the-boot-volume).
- Booting from a volume that either lacks an OS or is among multiple volumes with identical operating systems can lead to inconsistent boot outcomes.
- - Configure the network of the Instance. You can either select to use **Routed public IP** (a dedicated public IP address routed to your Instance that allows direct communication between the Instance and the Internet) or a **NAT public IP** (a public IP address that uses a carrier-grade NAT to translate the Instances NAT IP address). If you are unsure which to use, we recommend a routed public IP for ease of use and improved performance.
+ - **Configure network** of the Instance.
- Leave the checkbox ticked to assign a **Public IPv4** to the Instance. You can either allocate a new IPv4 address or select one or multiple existing IPv4s. Alternatively, uncheck the box if you do not want an IPv4.
- Leave the checkbox ticked to assign a **Public IPv6** to the Instance. You can either allocate a new IPv6 address or select one or multiple existing IPv6s. Alternatively, uncheck the box if you do not want an IPv6.
- You can attach up to 5 IPs to an Instance, combining IPv4 and IPv6 addresses.
+ You can attach up to 5 IPs to an Instance, combining IPv4 and IPv6 addresses, which is useful for running different services or applications on the same Instance.
- - Enter a **Name** for your Instance, or leave the randomly-generated name in place. Optionally, you can add [tags](/compute/instances/concepts/#tags) to help you organize your Instance.
- - Click **Advanced options** if you want to configure a [cloud-init configuration](/compute/instances/concepts/#cloud-init). Otherwise, leave these options at their default values.
- - Verify the [SSH keys](/console/account/concepts/#ssh-key) that will give you access to your Instance.
- - Verify the **Estimated cost** of your Instance, based on the specifications you chose.
+ - (Optional) Click **Advanced options** to configure a [cloud-init configuration](/compute/instances/concepts/#cloud-init). Otherwise, leave these options at their default values.
+ You can configure a cloud-init script to automate Instance setup, such as setting up software, users, and system configurations at the first boot.
+ - **Verify the [SSH keys](/console/account/concepts/#ssh-key)** that will give you access to your Instance.
+ - **Verify the Estimated cost** of your Instance, based on the specifications you chose.
4. Click **Create Instance**. The creation of your Instance begins, and you will be informed when the Instance is ready.
Your Instance is now created, and you are redirected to the **Overview** tab. From here, you can see information including your Instance's Public IP, the SSH command to use to [connect to it](/compute/instances/how-to/create-an-instance/), and other information, settings, and actions for the Instance.
@@ -72,23 +71,24 @@ Select a tab below for instructions on how to create an Instance via either our
2. Click **Create Instance**. The [Instance creation page](https://console.scaleway.com/instance/servers) displays.
3. Complete the following steps:
- Choose an **Availability Zone**, which represents the geographical region where your Instance will be deployed.
- - Choose a **POP2-WIN** Instance type from the **Production-Optimized** range.
- - Choose a **Windows Server** Image to run on your Instance.
- - Add **Volumes**, which are storage spaces used by your Instances. You can add several block volumes and define how much storage you want for each.
+ - **Choose a POP2-WIN** Instance type from the **Production-Optimized** range.
+ - **Choose a Windows Server image** to run on your Instance.
+ - **Name your Instance**, or leave the randomly-generated name in place. Optionally, you can add [tags](/compute/instances/concepts/#tags) to help you organize your Instance.
+ - **Add volumes**, which are storage spaces used by your Instances. A block volume with a default name and 5,000 IOPS is automatically provided for your system volume. You can customize this volume and attach up to 16 local and/or block volumes as needed.
- - Ensure that a volume with a Windows image has a minimum capacity of 25 GB.
+ - Ensure that a volume containing a Windows Server image has a minimum capacity of 25 GB.
- When multiple Block Storage volumes are linked to your Instance, the primary volume will host the OS and is essential for booting the Instance. Once the Instance is created, you can [modify your boot volume](/compute/instances/how-to/use-boot-modes/#how-to-change-the-boot-volume).
- Booting from a volume that either lacks an OS or is among multiple volumes with identical operating systems can lead to inconsistent boot outcomes.
- - Configure the network of the Instance. You can either select to use **Routed public IP** (a dedicated public IP address routed to your Instance that allows direct communication between the Instance and the Internet) or a **NAT public IP** (a public IP address that uses a carrier-grade NAT to translate the Instances NAT IP address). If you are unsure which to use, we recommend a routed public IP for ease of use and improved performance.
+ - **Configure network** of the Instance.
- Leave the checkbox ticked to assign a **Public IPv4** to the Instance. You can either allocate a new IPv4 address or select one or multiple existing IPv4s. Alternatively, uncheck the box if you do not want an IPv4.
- Leave the checkbox ticked to assign a **Public IPv6** to the Instance. You can either allocate a new IPv6 address or select one or multiple existing IPv6s. Alternatively, uncheck the box if you do not want an IPv6.
- You can attach up to 5 IPs to an Instance, combining IPv4 and IPv6 addresses.
+ You can attach up to 5 IPs to an Instance, combining IPv4 and IPv6 addresses, which is useful for running different services or applications on the same Instance.
- - Enter a **Name** for your Instance, or leave the randomly-generated name in place. Optionally, you can add [tags](/compute/instances/concepts/#tags) to help you organize your Instance.
- - Click **Advanced options** if you want to configure a [cloud-init configuration](/compute/instances/concepts/#cloud-init). Otherwise, leave these options at their default values.
- - Choose the [RSA SSH key](/identity-and-access-management/organizations-and-projects/how-to/create-ssh-key/#how-to-generate-a-rsa-ssh-key-pair) that will give you access to your Instance. If you do not have an RSA SSH key yet, click **Add RSA SSH key** and follow the steps indicated.
+ - (Optional) Click **Advanced options** to configure a [cloud-init configuration](/compute/instances/concepts/#cloud-init). Otherwise, leave these options at their default values.
+ You can configure a cloud-init script to automate Instance setup, such as setting up software, users, and system configurations at the first boot.
+ - **Choose the [RSA SSH key](/identity-and-access-management/organizations-and-projects/how-to/create-ssh-key/#how-to-generate-a-rsa-ssh-key-pair)** that will give you access to your Instance. If you do not have an RSA SSH key yet, click **Add RSA SSH key** and follow the steps indicated.
- Verify the **Estimated cost** of your Instance, based on the specifications you chose.
4. Click **Create Instance**. The creation of your Instance begins, and you will be informed when the Instance is ready.
diff --git a/compute/instances/quickstart.mdx b/compute/instances/quickstart.mdx
index 8e6413c72d..2fd268fd79 100644
--- a/compute/instances/quickstart.mdx
+++ b/compute/instances/quickstart.mdx
@@ -23,34 +23,33 @@ Scaleway [Instances](/compute/instances/concepts/#instance) are computing units
## How to create an Instance
-1. Click **Instances** in the **Compute** section of the side menu. The [Instance dashboard](https://console.scaleway.com/instance/servers) displays.
-2. Click **Create Instance**. The [Instance creation page](https://console.scaleway.com/instance/servers) displays.
-3. Complete the following steps:
- - Choose an **Availability Zone**, which represents the geographical region where your Instance will be deployed.
- - Choose an **Instance type**.
- Instance offers vary in pricing, processing power, memory, storage, and bandwidth. [Discover the best Instance type for your needs](/compute/instances/reference-content/choosing-instance-type/).
- - Choose an **Image** to run on your Instance.
- This can be an operating system, an InstantApp, or a custom image. [Check all available Linux distributions and InstantApps](/compute/instances/reference-content/images-and-instantapps/).
- - Add **Volumes**, which are storage spaces used by your Instances.
- - For **GP1 Instances** you can leave the default settings of maximum local storage, or choose how much [local](/compute/instances/concepts/#local-volumes) and/or [block](/compute/instances/concepts/#block-volumes) storage you want. Your **system volume** is the volume on which your Instance will boot. The system volume can be either a local or a block volume.
- - **PLAY2**, **PRO2**, and **Enterprise** Instances boot directly [on block volumes](/compute/instances/concepts/#boot-on-block). You can add several block volumes and define how much storage you want for each.
-
- - Ensure that a volume with an OS image has a minimum capacity of 10 GB. For a GPU OS, the recommended size is 125 GB.
- - The minimum volume size for Microsoft Windows OS is 25 GB.
- - When multiple Block Storage volumes are linked to your Instance, the primary volume will host the OS and is essential for booting the Instance. Once the Instance is created can [modify your boot volume](/compute/instances/how-to/use-boot-modes/#how-to-change-the-boot-volume).
- - Booting from a volume that either lacks an OS or is among multiple volumes with identical operating systems can lead to inconsistent boot outcomes.
-
- - Configure the network of the Instance. You can either select to use **Routed public IP** (a dedicated public IP address routed to your Instance that allows direct communication between the Instance and the Internet) or a **NAT public IP** (a public IP address that uses a carrier-grade NAT to translate the Instances NAT IP address). If you are unsure which to use, we recommend a routed public IP for ease of use and improved performance.
- - Leave the checkbox ticked to assign a **Public IPv4** to the Instance. You can either allocate a new IPv4 address or select one or multiple existing IPv4s. Alternatively, uncheck the box if you do not want an IPv4.
- - Leave the checkbox ticked to assign a **Public IPv6** to the Instance. You can either allocate a new IPv6 address or select one or multiple existing IPv46. Alternatively, uncheck the box if you do not want an IPv4.
-
- You can attach up to 5 IPs to an Instance, combining IPv4 and IPv6 addresses.
-
- - Enter a **Name** for your Instance, or leave the randomly-generated name in place. Optionally, you can add [tags](/compute/instances/concepts/#tags) to help you organize your Instance.
- - Click **Advanced options** if you want to configure a [cloud-init configuration](/compute/instances/concepts/#cloud-init). Otherwise, leave these options at their default values.
- - Verify the [SSH keys](/console/account/concepts/#ssh-key) that will give you access to your Instance.
- - Verify the **Estimated cost** of your Instance, based on the specifications you chose.
-4. Click **Create Instance**. The creation of your Instance begins, and you will be informed when the Instance is ready.
+ 1. Click **Instances** in the **Compute** section of the side menu. The [Instance dashboard](https://console.scaleway.com/instance/servers) displays.
+ 2. Click **Create Instance**. The [Instance creation page](https://console.scaleway.com/instance/servers) displays.
+ 3. Complete the following steps:
+ - **Choose an Availability Zone**, which represents the geographical region where your Instance will be deployed.
+ - **Choose an Instance type**.
+ Instance offers vary in pricing, processing power, memory, storage, and bandwidth. [Discover the best Instance type for your needs](/compute/instances/reference-content/choosing-instance-type/).
+ - **Choose an image** to run on your Instance.
+ This can be an operating system, an InstantApp, or a custom image. [Check all available Linux distributions and InstantApps](/compute/instances/reference-content/images-and-instantapps/).
+ - **Name your Instance**, or leave the randomly-generated name in place. Optionally, you can add [tags](/compute/instances/concepts/#tags) to help you organize your Instance.
+ - **Add volumes**, which are storage spaces used by your Instances. A block volume with a default name and 5,000 IOPS is automatically provided for your system volume. You can customize this volume and attach up to 16 local and/or block volumes as needed.
+
+ - Ensure that the volume containing your OS image has a minimum size of 10 GB. For a GPU OS, the recommended size is 125 GB.
+ - When multiple Block Storage volumes are linked to your Instance, the primary volume will host the OS and is essential for booting the Instance. Once the Instance is created, you can [modify your boot volume](/compute/instances/how-to/use-boot-modes/#how-to-change-the-boot-volume).
+ - Booting from a volume that either lacks an OS or is among multiple volumes with identical operating systems can lead to inconsistent boot outcomes.
+
+ - **Configure network** of the Instance.
+ - Leave the checkbox ticked to assign a **Public IPv4** to the Instance. You can either allocate a new IPv4 address or select one or multiple existing IPv4s. Alternatively, uncheck the box if you do not want an IPv4.
+ - Leave the checkbox ticked to assign a **Public IPv6** to the Instance. You can either allocate a new IPv6 address or select one or multiple existing IPv6s. Alternatively, uncheck the box if you do not want an IPv6.
+
+ You can attach up to 5 IPs to an Instance, combining IPv4 and IPv6 addresses, which is useful for running different services or applications on the same Instance.
+
+ - (Optional) Click **Advanced options** to configure a [cloud-init configuration](/compute/instances/concepts/#cloud-init). Otherwise, leave these options at their default values.
+ You can configure a cloud-init script to automate Instance setup, such as setting up software, users, and system configurations at the first boot.
+ - **Verify the [SSH keys](/console/account/concepts/#ssh-key)** that will give you access to your Instance.
+ - **Verify the Estimated cost** of your Instance, based on the specifications you chose.
+ 4. Click **Create Instance**. The creation of your Instance begins, and you will be informed when the Instance is ready.
+ Once the Instance is created, you can connect to it using the SSH keys you have configured, and begin setting up your applications.
## How to connect to an Instance
diff --git a/compute/instances/reference-content/add-instance-specific-ssh-keys-using-tags.mdx b/compute/instances/reference-content/add-instance-specific-ssh-keys-using-tags.mdx
index e91082f8a3..f0073fdafe 100644
--- a/compute/instances/reference-content/add-instance-specific-ssh-keys-using-tags.mdx
+++ b/compute/instances/reference-content/add-instance-specific-ssh-keys-using-tags.mdx
@@ -10,7 +10,7 @@ categories:
dates:
validation: 2024-10-08
posted: 2024-10-08
-tags: Instance ssh-key ssh tag
+tags: instance ssh-key ssh tag
---
In cloud environments, managing SSH keys across multiple Instances is key to keeping your infrastructure secure and easy to access.
diff --git a/compute/instances/reference-content/understanding-automatic-network-hot-reconfiguration.mdx b/compute/instances/reference-content/understanding-automatic-network-hot-reconfiguration.mdx
new file mode 100644
index 0000000000..ef266ad380
--- /dev/null
+++ b/compute/instances/reference-content/understanding-automatic-network-hot-reconfiguration.mdx
@@ -0,0 +1,82 @@
+---
+meta:
+ title: Understanding automatic network hot-reconfiguration for Scaleway Instances
+ description: Find out how to configure automatic network hot-reconfiguration for Scaleway Instances.
+content:
+ h1: Understanding automatic network hot-reconfiguration for Scaleway Instances
+ paragraph: Find out how to configure automatic network hot-reconfiguration for Scaleway Instances.
+categories:
+ - compute
+dates:
+ validation: 2024-10-29
+ posted: 2024-10-29
+tags: instance network hot-reconfiguration
+---
+
+The Scaleway Instances product includes a feature called **automatic network hot-reconfiguration**.
+
+This mechanism automatically configures or deconfigures a [flexible IP address](/compute/instances/concepts/#flexible-ip) in the guest operating system when it is attached to or detached from an Instance.
+
+This guide explains how to enable or disable the automatic network hot-reconfiguration mechanism on your Instance.
+
+
+ This documentation page does not apply to Instances running the Microsoft Windows operating system.
+
+
+## Supported configurations
+
+Before proceeding, ensure that your operating system supports the target network configuration: refer to Scaleway’s compatibility guidelines on [OS images and flexible IP type combinations](/compute/instances/reference-content/comaptibility-scw-os-images-flexible-ip/).
+
+Starting from **October 10th, 2024**, all GNU/Linux-based operating systems and InstantApp images for Scaleway Instances have automatic network hot-reconfiguration enabled by default.
+
+To verify that the feature is active on your Instance, use the following command:
+
+```bash
+# systemctl is-active scw-net-reconfig.path
+```
+
+If the output is `active`, the feature is enabled and ready to use. If the output is `inactive`, you have to enable it first.
+
+
+### Enabling network hot-reconfiguration
+
+Follow these steps to enable automatic network hot-reconfiguration on a Scaleway Instance where the feature is currently inactive.
+
+1. Enable the QEMU Guest Agent. Refer to Scaleway’s documentation on [enabling the QEMU Guest Agent (GQA)](/compute/instances/reference-content/understanding-qemu-guest-agent/#opting-in) for further details.
+
+2. Install the latest Scaleway ecosystem package.
+
+ - **Fedora / AlmaLinux / RockyLinux / CentOS**
+ ```bash
+ # yum -y --best install scaleway-ecosystem
+ ```
+
+ - **Debian / Ubuntu**
+ ```bash
+ # apt-get update
+ # apt-get -y install scaleway-ecosystem
+ ```
+
+
+ Ensure you install version `0.0.7-1` or higher of the `scaleway-ecosystem` package.
+
+
+3. Enable the automatic network reconfiguration mechanism.
+
+ On Debian and Ubuntu systems, the mechanism typically activates automatically after installing or upgrading the `scaleway-ecosystem` package. However, Red Hat-based distributions may require a manual start:
+
+ ```bash
+ # systemctl enable --now scw-net-reconfig.path
+ ```
+
+
+ Rebooting your Instance will also activate network hot-reconfiguration.
+
+
+### Disabling network hot-reconfiguration
+
+If you prefer to prevent automatic network reconfiguration when a flexible IP is attached or detached, run the following command:
+
+```bash
+# systemctl disable --now scw-net-reconfig.path
+```
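+
+You can confirm the change by re-running the status check from earlier on this page: `systemctl is-active scw-net-reconfig.path` should now report `inactive`.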
\ No newline at end of file
diff --git a/compute/instances/reference-content/understanding-qemu-guest-agent.mdx b/compute/instances/reference-content/understanding-qemu-guest-agent.mdx
new file mode 100644
index 0000000000..f43cf2cf77
--- /dev/null
+++ b/compute/instances/reference-content/understanding-qemu-guest-agent.mdx
@@ -0,0 +1,113 @@
+---
+meta:
+ title: Understanding the QEMU Guest Agent in Scaleway Instances
+ description: Discover how the QEMU Guest Agent works with Scaleway Instances.
+content:
+ h1: Understanding the QEMU Guest Agent in Scaleway Instances
+ paragraph: Discover how the QEMU Guest Agent works with Scaleway Instances.
+tags: instance qga qemu guest agent
+dates:
+ validation: 2024-10-28
+ posted: 2024-10-28
+categories:
+ - compute
+---
+
+Some features of the Instances product require Scaleway's infrastructure to query or exchange information with your Instance. To enable this communication, a software component must run on the guest operating system: the QEMU Guest Agent (QGA).
+
+This page provides essential insights into this mechanism.
+
+
+ This documentation page does not apply to Instances running the Microsoft Windows operating system.
+
+
+## What are the features provided by QGA?
+
+Running the QEMU Guest Agent (QGA) on your Instance currently enables the following feature:
+
+- **Automatic network reconfiguration** upon flexible IP attachment or detachment. [Learn how to enable/disable this feature](/compute/instances/reference-content/understanding-automatic-network-hot-reconfiguration/).
+
+Additional features may be added in the future.
+
+## Checking QGA's status
+
+Since March 1st, 2024, all Scaleway-provided GNU/Linux and InstantApp images for Instances come with QGA pre-installed and enabled by default.
+
+To verify that QGA is running on your Instance, use the following command:
+
+```bash
+# systemctl is-active qemu-guest-agent.service
+```
+
+If the output is `active`, QGA is running, and you are ready to benefit from the associated features. If the output is `inactive`, you may need to install and/or activate QGA.
+
+## Opting in
+
+Follow these steps to enable QGA on an Instance where it is currently inactive.
+
+### Installation
+
+Instances created from images older than March 1st, 2024 may require manual installation of the `qemu-guest-agent` package:
+
+- **Fedora / AlmaLinux / RockyLinux / CentOS**
+
+ ```bash
+ # yum -y --best install qemu-guest-agent
+ ```
+
+- **Debian / Ubuntu**
+
+ ```bash
+ # apt-get update
+ # apt-get -y install qemu-guest-agent
+ ```
+
+### Activation
+
+After installing the package, start the `qemu-guest-agent.service` by either:
+
+- Rebooting your Instance, or
+- Running the following command:
+
+ ```bash
+ # systemctl start qemu-guest-agent.service
+ ```
+
+## Opting out
+
+Follow these steps to disable QGA and the associated Scaleway features.
+
+### Deactivation
+
+
+ Disabling QGA is not recommended, as doing so also disables all the [Scaleway features](#what-are-the-features-provided-by-qga) it provides.
+
+
+To stop and disable QGA, run:
+
+```bash
+# systemctl stop qemu-guest-agent.service
+# systemctl mask qemu-guest-agent.service
+```
+
+This stops the service and prevents it from starting on subsequent reboots.
+
+### Uninstallation (Optional)
+
+
+ You do not necessarily need to uninstall QGA to opt out. [Deactivating the service](#deactivation) is sufficient.
+
+
+If you prefer to completely remove QGA, ensure the service is stopped first, then run:
+
+- **Fedora / AlmaLinux / RockyLinux / CentOS**
+
+ ```bash
+ # yum -y remove qemu-guest-agent
+ ```
+
+- **Debian / Ubuntu**
+
+ ```bash
+ # apt-get -y purge qemu-guest-agent
+ ```
\ No newline at end of file
diff --git a/compute/instances/troubleshooting/bootscript-eol.mdx b/compute/instances/troubleshooting/bootscript-eol.mdx
index 8bb1b5c1dc..2fb72ee52a 100644
--- a/compute/instances/troubleshooting/bootscript-eol.mdx
+++ b/compute/instances/troubleshooting/bootscript-eol.mdx
@@ -90,10 +90,10 @@ If your Instance is using the bootscript option to boot in normal mode you are i
- #### Create a snapshot of the volume(s) and export it to S3 to retrieve the data
+ #### Create a snapshot of the volume(s) and export it to Object Storage to retrieve the data
1. [Create a snapshot](/compute/instances/how-to/create-a-snapshot/) of the volume using the **l_ssd** type of snapshot.
- 2. [Export](/compute/instances/how-to/snapshot-import-export-feature/) the snapshot to an S3 bucket in the same region as the Instance.
+ 2. [Export](/compute/instances/how-to/snapshot-import-export-feature/) the snapshot to an Object Storage bucket in the same region as the Instance.
3. Retrieve your data from the Object Storage bucket and reuse it at your convenience.
4. Delete the old Instance that was using a bootscript once you have recovered your data.
diff --git a/compute/instances/troubleshooting/cant-connect-ssh.mdx b/compute/instances/troubleshooting/cant-connect-ssh.mdx
index da2e7b2dd2..514104c8e7 100644
--- a/compute/instances/troubleshooting/cant-connect-ssh.mdx
+++ b/compute/instances/troubleshooting/cant-connect-ssh.mdx
@@ -133,4 +133,8 @@ You must upload the content of the public part of the SSH key pair to the Scalew
If you have any difficulties connecting to an Instance after uploading a new public SSH key to your Project, try the following:
- If you cannot connect to your Instance at all via SSH, reboot your Instance from the console and try again.
- If you can connect to your Instance using a previously uploaded SSH key but not the new one, go ahead and connect to your Instance with the old key. Once connected, run the `scw-fetch-ssh-keys --upgrade` command, which launches a script on your Instance to update your SSH keys. You can then check that the new key has been added to the `authorized_keys` file (`~/.ssh/authorized_keys`). Note that this command works only for Instances.
-
\ No newline at end of file
+
+
+## Timeout when trying to connect
+
+You may find the SSH connection attempt times out without connecting. This may be expected behavior if the Instance is attached to a Private Network on which there is also a Public Gateway advertising the default route. See our [dedicated troubleshooting](/network/public-gateways/troubleshooting/cant-connect-to-instance-with-pn-gateway/) page for more help with this issue.
\ No newline at end of file
diff --git a/compute/instances/troubleshooting/fix-long-delays-booting-without-public-ip.mdx b/compute/instances/troubleshooting/fix-long-delays-booting-without-public-ip.mdx
index b09a303030..2f2514f63e 100644
--- a/compute/instances/troubleshooting/fix-long-delays-booting-without-public-ip.mdx
+++ b/compute/instances/troubleshooting/fix-long-delays-booting-without-public-ip.mdx
@@ -1,13 +1,13 @@
---
meta:
title: Fix long delays when booting without a public IP
- description: This page explains how to avoid long delays when booting without a public IP
+ description: This page explains how to avoid long delays when booting a Scaleway Instance without a public IP
content:
h1: Fix long delays when booting without a public IP
- paragraph: This page explains how to avoid long delays when booting without a public IP
+ paragraph: This page explains how to avoid long delays when booting a Scaleway Instance without a public IP
tags: centos-stream rockylinux almalinux network-manager ipv6 routed ip
dates:
- validation: 2024-04-17
+ validation: 2024-10-21
posted: 2024-04-17
categories:
- compute
diff --git a/console/account/reference-content/scaleway-network-information.mdx b/console/account/reference-content/scaleway-network-information.mdx
index 00e8a9ee76..c3257afee9 100644
--- a/console/account/reference-content/scaleway-network-information.mdx
+++ b/console/account/reference-content/scaleway-network-information.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Access detailed network information for Scaleway services.
tags: scaleway ip-range ntp rpn vpn dns
dates:
- validation: 2024-05-14
+ validation: 2024-10-22
posted: 2021-08-20
categories:
- console
@@ -90,15 +90,15 @@ IPv6:
Generic: `ntp.online.net`
-- Primary NTP server: `62.210.16.53` (`ntp1.online.net`)
-- Seconday NTP server: `62.210.16.54` (`ntp2.online.net`)
+- Primary NTP server: `51.159.47.151` (`ntp1.online.net`)
+- Secondary NTP server: `51.158.192.3` (`ntp2.online.net`)
### Rdate server
Generic: `rdate.dedibox.com`
-- Primary rdate server: `62.210.16.53` (`ntp1.online.net`)
-- Seconday rdate server: `62.210.16.54` (`ntp2.online.net`)
+- Primary rdate server: `51.159.47.151` (`ntp1.online.net`)
+- Secondary rdate server: `51.158.192.3` (`ntp2.online.net`)
Backup server: `dedibackup.dedibox.fr`
diff --git a/console/billing/quickstart.mdx b/console/billing/quickstart.mdx
index d88004b8fa..a97bc40952 100644
--- a/console/billing/quickstart.mdx
+++ b/console/billing/quickstart.mdx
@@ -13,8 +13,13 @@ categories:
- billing
---
-Before you can order Scaleway resources, you must add your payment method to your account.
+## Console overview
+
+Follow this guided tour to discover how to use the Billing Space.
+
+
+Before you can order Scaleway resources, you must add your payment method to your account.
- A Scaleway account logged into the [console](https://console.scaleway.com)
diff --git a/containers/kubernetes/api-cli/managing-storage.mdx b/containers/kubernetes/api-cli/managing-storage.mdx
deleted file mode 100644
index 7a9e2aa5bc..0000000000
--- a/containers/kubernetes/api-cli/managing-storage.mdx
+++ /dev/null
@@ -1,102 +0,0 @@
----
-meta:
- title: Managing Block Storage volumes with Scaleway CSI
- description: Learn how to manage Block Storage volumes using Scaleway's CSI driver on Kubernetes Kapsule and Kosmos clusters.
-content:
- h1: Managing Block Storage volumes with Scaleway CSI
- paragraph: Learn how to manage Block Storage volumes using Scaleway's CSI driver on Kubernetes Kapsule and Kosmos clusters.
-tags: block-storage scaleway-csi kubernetes pvc
-dates:
- validation: 2024-09-25
- posted: 2021-08-12
-categories:
- - kubernetes
----
-
-The [Scaleway Block Volume](https://www.scaleway.com/en/block-storage/) Container Storage Interface (CSI) driver is an implementation of the [CSI interface](https://github.com/container-storage-interface/spec/blob/master/spec.md) to provide a way to manage Scaleway Block Volumes through a container orchestration system, like Kubernetes. It is installed by default on every Kubernetes Kapsule and Kosmos cluster.
-
-
-
-- A Scaleway account logged into the [console](https://console.scaleway.com)
-- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
-- A valid [API key](/identity-and-access-management/iam/how-to/create-api-keys/)
-- Your Scaleway Project or Organization ID
-- [Created](/containers/kubernetes/how-to/create-cluster/) a Kubernetes cluster running on Scaleway Instances (v1.21+)
-
-
- Refer to our video tutorial [Getting Started with Kubernetes Part 4 - Storage](/containers/kubernetes/videos/) to view a visual presentation and step-by-step guidance of how to manage Block Storage volumes on Kubernetes with the Scaleway CSI.
-
-
-## Verification of CSI driver status
-
-To verify if the driver is running, use the following command:
-
-```bash
-kubectl get csidriver
-```
-
-The output of this command provides a quick status update on the CSI plugin within your Kubernetes cluster. For the latest features and enhancements, consider upgrading to [release 0.3](https://github.com/scaleway/scaleway-csi/tree/release-0.3#block-storage-low-latency), which supports **[Block Storage low latency](/storage/block/quickstart/)** volumes.
-
-To identify your current CSI release version, navigate to the [Cockpit interface](/observability/cockpit/how-to/access-grafana-and-managed-dashboards/), specifically the **Kubernetes Cluster - Overview** dashboard.
-
-## Upgrading to CSI version 0.3
-
-### Using the API with curl
-
-You can trigger the migration to SBS-CSI using the following `curl` command:
-
-```bash
-curl "https://api.scaleway.com/k8s/v1/regions/$REGION/clusters/$CLUSTER_ID/migrate-to-sbs-csi" \
--X POST \
--H "X-Auth-Token: $TOKEN"
-```
-
-Replace the placeholders with the following:
-
-- `$REGION`: Your cluster's region (e.g., `fr-par`, `nl-ams`).
-- `$CLUSTER_ID`: Your cluster ID.
-- `$TOKEN`: Your Scaleway API token.
-
-This command will initiate the migration process for your cluster to the new SBS-CSI.
-
-### Using the Scaleway CLI
-
-Alternatively, you can use the Scaleway CLI to perform the migration. Ensure the CLI is installed and configured with your API credentials.
-
-1. Install and configure the Scaleway CLI, if you have not already:
- ```bash
- scw init
- ```
-
-2. Run the migration command:
-
- ```bash
- scw k8s cluster migrate-to-sbs-csi $CLUSTER_ID --region=$REGION
- ```
-
- Replace `$REGION` and `$CLUSTER_ID` with your cluster’s region and ID, respectively.
-
-### Post-migration verification
-
-After initiating the migration, the cluster status will change to _updating_. Once the migration completes, you can verify that the CSI driver has been updated and that the new driver properly handles Persistent Volume Claims (PVCs).
-
-```bash
-kubectl get csidriver
-```
-
-This command will confirm that the migration was successful.
-
-## Going further
-
-* [Creating persistent volumes with Scaleway Block Storage](https://github.com/scaleway/scaleway-csi/tree/release-0.3/examples/kubernetes#pvc--deployment)
-* [Creating raw block volumes](https://github.com/scaleway/scaleway-csi/tree/release-0.3/examples/kubernetes#raw-block-volumes)
-* [Importing existing Scaleway volumes](https://github.com/scaleway/scaleway-csi/tree/release-0.3/examples/kubernetes#importing-existing-scaleway-volumes)
-* [Creating volume snapshots](https://github.com/scaleway/scaleway-csi/tree/release-0.3/examples/kubernetes#volume-snapshots)
-* [Importing volume snapshots](https://github.com/scaleway/scaleway-csi/tree/release-0.3/examples/kubernetes#importing-snapshots)
-* [How to crate a storage class](https://github.com/scaleway/scaleway-csi/tree/release-0.3/examples/kubernetes#different-storageclass)
-* [How to choose a zone for the volumes](https://github.com/scaleway/scaleway-csi/tree/release-0.3/examples/kubernetes#specify-in-which-zone-the-volumes-are-going-to-be-created)
-* [How to choose the number of IOPS](https://github.com/scaleway/scaleway-csi/tree/release-0.3/examples/kubernetes#choose-the-number-of-iops)
-
- * `sbs-5k` and `sbs-15k` are pre-configured storage classes designed to meet your IOPS requirements. You can achieve the equivalent of setting `iops:5k` in your custom class.
-
-* [Encrypting volumes](https://github.com/scaleway/scaleway-csi/tree/release-0.3/examples/kubernetes#encrypting-volumes)
diff --git a/containers/kubernetes/how-to/edit-kosmos-cluster.mdx b/containers/kubernetes/how-to/edit-kosmos-cluster.mdx
index 46c8ac7ce2..80d3553236 100644
--- a/containers/kubernetes/how-to/edit-kosmos-cluster.mdx
+++ b/containers/kubernetes/how-to/edit-kosmos-cluster.mdx
@@ -104,10 +104,14 @@ In order to add external nodes to your multi-cloud cluster, you must first [crea
## How to upgrade nodes in a multi-cloud pool in your Kosmos cluster
+
+ Note that the node will reappear with a different node ID. If your automation relies on this ID (for instance, when you use local PVCs), it will break.
+
+
The Kubernetes version of the existing nodes in your multi-cloud pool can be upgraded in place. Your workload will theoretically keep running during the upgrade, but it is best to drain the node before the upgrade.
1. In the Pools section of your Kosmos cluster, click **Upgrade** next to the node pool. This will not cause any of your existing nodes to upgrade, but will instead ensure that any new nodes added to the pool will start up with the newer version.
-2. Run the installer program as you would do for a fresh node install, with the additional option `-self-update`. If the option is not available, redownload the program from S3 bucket.
+2. Run the installer program as you would do for a fresh node install, with the additional option `-self-update`. If the option is not available, redownload the program from the Object Storage bucket.
3. Now the node will register itself with the API server. Once it is ready, you will see the same node with two kubelet versions. The older node should end up `NotReady` after about 5 minutes; you can then safely delete it with `kubectl`.
4. Detach the older node in Scaleway API.
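+
+For reference, a minimal `kubectl` sketch of the drain and clean-up steps described above (the node name is a placeholder):
+
+```bash
+# Drain the old node before upgrading it (DaemonSet-managed pods are ignored)
+kubectl drain my-old-node --ignore-daemonsets
+
+# After the upgraded kubelet registers, the old entry should turn NotReady
+kubectl get nodes
+
+# Delete the old node object once it is NotReady
+kubectl delete node my-old-node
+```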
diff --git a/containers/kubernetes/reference-content/modifying-kernel-parameters-kubernetes-cluster.mdx b/containers/kubernetes/reference-content/modifying-kernel-parameters-kubernetes-cluster.mdx
new file mode 100644
index 0000000000..82832bb82e
--- /dev/null
+++ b/containers/kubernetes/reference-content/modifying-kernel-parameters-kubernetes-cluster.mdx
@@ -0,0 +1,123 @@
+---
+meta:
+ title: Modifying kernel parameters in a Kubernetes cluster using a DaemonSet
+ description: This guide explains how to modify kernel parameters in a Kubernetes cluster using a DaemonSet
+content:
+ h1: Modifying kernel parameters in a Kubernetes cluster using a DaemonSet
+ paragraph: This guide explains how to modify kernel parameters in a Kubernetes cluster using a DaemonSet
+tags: kubernetes kernel
+dates:
+ validation: 2024-10-24
+ posted: 2024-10-24
+categories:
+ - kubernetes
+---
+
+Kernel parameters control the behavior of the operating system at runtime. They allow you to configure and fine-tune various aspects of the Linux kernel, such as networking, memory management, process handling, and security. These parameters are located in the `/proc/sys` directory on each node and can be dynamically modified at runtime using the `sysctl` command.
+
+This guide outlines how to modify kernel parameters across all nodes in a Kubernetes cluster using a DaemonSet.
+
+## Identifying the kernel parameters to modify
+
+Kernel parameters, managed via the `sysctl` command, are grouped into different categories depending on which part of the kernel they influence:
+
+- **Networking (`net.*`)**: Controls network-related settings such as buffer sizes, TCP/IP settings, and routing.
+ *Example*: `net.ipv4.ip_forward` enables or disables IP packet forwarding, often used in routing scenarios.
+
+- **Memory Management (`vm.*`)**: Manages memory and swap behaviors.
+ *Example*: `vm.swappiness` controls how aggressively the system swaps memory pages to disk.
+
+- **File System (`fs.*`)**: Configures file system-related limits and behaviors.
+ *Example*: `fs.file-max` sets the maximum number of file descriptors the system can allocate.
+
+- **General Kernel Settings (`kernel.*`)**: Configures overall kernel behaviors.
+ *Example*: `kernel.hostname` defines the system’s hostname.
+
+- **Security (`kernel.random.*`, `net.ipv4.conf.*`, etc.)**: Manages security settings such as IP forwarding, source address validation, and firewall rules.
+ *Example*: `net.ipv4.conf.all.rp_filter` enables reverse path filtering for added network security.
+
+- **Process Limits (`kernel.*`)**: Controls limits for processes, such as the maximum number of processes or threads.
+ *Example*: `kernel.pid_max` sets the maximum number of process IDs (PIDs) the system can allocate.
+
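+As a quick illustration, you can read and change a parameter at runtime with `sysctl` directly on a node (the parameter and value below are arbitrary examples):
+
+```bash
+# Read the current value of a parameter
+sysctl net.ipv4.ip_forward
+
+# Change it for the running system (requires root; not persistent across reboots)
+sysctl -w net.ipv4.ip_forward=1
+```
+
+Changes made this way are lost on reboot, which is one reason to apply them from a DaemonSet that runs on every node, as described below.
+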
+## Creating a DaemonSet to modify kernel parameters
+
+To apply kernel parameter changes across all nodes in the cluster, you can create a Kubernetes DaemonSet that runs privileged pods. This will ensure the changes are applied to every node.
+
+Create a YAML file (e.g., `sysctl-daemonset.yaml`), copy/paste the following content into the file, save it and exit the text editor:
+
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: sysctl-tuning
+ namespace: kube-system
+ labels:
+ app: sysctl-tuning
+spec:
+ selector:
+ matchLabels:
+ app: sysctl-tuning
+ template:
+ metadata:
+ labels:
+ app: sysctl-tuning
+ spec:
+ hostNetwork: true # Share the host's network namespace for network-related sysctl changes
+ hostPID: true # Access the host's PID namespace for sysctl commands
+ initContainers:
+ - name: sysctl-init # Init container to set sysctl parameters
+ image: busybox:latest
+ command:
+ - /bin/sh
+ - -c
+ - |
+ sysctl -w net.core.rmem_max=7500000 # Set the maximum receive buffer size
+ sysctl -w net.core.wmem_max=7500000 # Set the maximum send buffer size
+ securityContext:
+ privileged: true # Privileged access to modify sysctl settings on the host
+ containers:
+ - name: sleep-container # Main container to keep the pod running
+ image: busybox:latest
+ command:
+ - /bin/sh
+ - -c
+ - sleep infinity # Keep the pod alive indefinitely
+```
+
+## Applying the DaemonSet
+
+To apply the configuration, use the following command:
+
+```bash
+kubectl apply -f sysctl-daemonset.yaml
+```
+
+This command deploys the DaemonSet, which ensures that the kernel parameters are modified on all nodes.
+
+## Verifying changes
+
+To verify that the DaemonSet is running on all nodes, use the following command:
+
+```bash
+kubectl get daemonset -n kube-system
+```
+
+To check if the kernel parameters were successfully updated on a node, SSH into the node and run:
+
+```bash
+ssh <user>@<node-public-ip>  # placeholders: the node's SSH user and public IP
+sysctl net.core.rmem_max
+sysctl net.core.wmem_max
+```
+
+
+ On Scaleway Kapsule SSH access is blocked by default. You need to enable SSH in your security group before connecting to the node. Refer to [How to enable or disable SSH ports on Kubernetes Kapsule cluster nodes](/containers/kubernetes/how-to/enable-disable-ssh/) for further information.
+
+
+## Cleaning up (Optional)
+
+If the DaemonSet is no longer needed after the kernel parameters have been modified, you can delete it with the following command:
+
+```bash
+kubectl delete -f sysctl-daemonset.yaml
+```
diff --git a/containers/kubernetes/reference-content/move-kubernetes-nodes-routed-ip.mdx b/containers/kubernetes/reference-content/move-kubernetes-nodes-routed-ip.mdx
index 814921080d..26060364f2 100644
--- a/containers/kubernetes/reference-content/move-kubernetes-nodes-routed-ip.mdx
+++ b/containers/kubernetes/reference-content/move-kubernetes-nodes-routed-ip.mdx
@@ -7,12 +7,16 @@ content:
paragraph: Safely moving Kubernetes nodes to routed IPs
tags: routed-ip ip-mobility kubernetes kapsule kosmos
dates:
- validation: 2024-06-03
+ validation: 2024-10-17
posted: 2024-04-22
categories:
- kubernetes
---
+
+ The migration of all Kubernetes nodes to routed IP has been successfully completed. No further action is required on your part. This documentation is retained **for reference purposes only**.
+
+
As part of our ongoing efforts to enhance infrastructure capabilities and provide better services to our customers, Scaleway is [introducing routed IPs for all products](https://www.scaleway.com/en/news/routed-ips-are-coming-to-all-scaleway-products/), including Kubernetes (K8s) worker nodes.
One of the standout benefits of routed IPs is their support for [IP mobility](/compute/instances/concepts/#ip-mobility), streamlining IP management and movement within the Scaleway ecosystem. By simplifying the process of reallocating IPs across different products, routed IPs empower users with unprecedented flexibility and control over their network infrastructure.
diff --git a/dedibox-network/ipv6/how-to/configure-ipv6-linux.mdx b/dedibox-network/ipv6/how-to/configure-ipv6-linux.mdx
index 2500805050..012c684894 100644
--- a/dedibox-network/ipv6/how-to/configure-ipv6-linux.mdx
+++ b/dedibox-network/ipv6/how-to/configure-ipv6-linux.mdx
@@ -1,13 +1,13 @@
---
meta:
title: How to configure IPv6 connectivity using systemd-networkd
- description: This page explains how to configure IPv6 connectivity using systemd-networkd.
+ description: This page explains how to configure IPv6 connectivity on a Scaleway Dedibox using systemd-networkd.
content:
h1: How to configure IPv6 connectivity using systemd-networkd
- paragraph: This page explains how to configure IPv6 connectivity using systemd-networkd.
+ paragraph: This page explains how to configure IPv6 connectivity on a Scaleway Dedibox using systemd-networkd.
tags: dedibox ipv6 systemd-networkd
dates:
- validation: 2024-04-15
+ validation: 2024-10-21
posted: 2021-08-03
categories:
- dedibox-network
diff --git a/faq/containerregistry.mdx b/faq/containerregistry.mdx
index 1972da2813..ab404ad2a4 100644
--- a/faq/containerregistry.mdx
+++ b/faq/containerregistry.mdx
@@ -16,6 +16,20 @@ Scaleway Container Registry is a fully managed mutualized Container Registry, de
You can store any docker container image on the Namespace and it is possible to set the visibility of each image towards your needs. It can either be private or public.
The Service is currently available in our `nl-ams` (Amsterdam, The Netherlands), `fr-par` (Paris, France), and `pl-waw` (Poland, Warsaw) Availability Zones.
+## How am I billed for Scaleway Container Registry?
+
+Scaleway Container Registry is billed based on the size of stored images and on outgoing data transfer.
+
+| | Stored images | Outgoing data transfer | Incoming data transfer |
+|----------------|-----------------|--------------------------------------------------|-------------------------|
+| Private images | €0.027/GB/month | Inter-regional: €0.033/GB - Intra-regional: free | Free |
+| Public images | Free up to 75GB | Inter-regional: free - Intra-regional: free | Free |
+
+
+- Inter-regional traffic: AMS ↔ PAR, WAW ↔ PAR, or AMS ↔ WAW
+- Intra-regional traffic: PAR ↔ PAR, WAW ↔ WAW, or AMS ↔ AMS
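+
+For example, with hypothetical volumes: storing 100 GB of private images for a full month and transferring 10 GB of them inter-regionally would cost (100 × €0.027) + (10 × €0.033) = €2.70 + €0.33 = €3.03 for that month.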
+
+
## Why do I get a message that the namespace is not available?
Each namespace has a unique name in each Availability Zone. If the namespace's name is already taken, it will no longer be available.
diff --git a/faq/objectstorage.mdx b/faq/objectstorage.mdx
index 24ac111baa..cefbf7572a 100644
--- a/faq/objectstorage.mdx
+++ b/faq/objectstorage.mdx
@@ -1,7 +1,7 @@
---
meta:
title: Object Storage FAQ
- description: Discover S3 Object Storage.
+ description: Discover Scaleway Object Storage.
content:
h1: Object Storage
hero: assets/objectstorage.webp
@@ -13,14 +13,14 @@ category: storage
## What is Object Storage?
-Object Storage is a service based on the S3 protocol. It allows you to store any kind of object (documents, images, videos, etc.).
+Object Storage is a service based on the Amazon S3 protocol. It allows you to store any kind of object (documents, images, videos, etc.).
Scaleway provides an integrated UI in the [console](https://console.scaleway.com) for convenience. As browsing infinite storage through the web requires some technical trade-offs, some actions are limited in the console for Object Storage:
- batch deletion is limited to 1000 objects.
- empty files are not reported as empty folders.
-We provide an S3-compatible API for programmatic access or usage with any compatible software. Therefore, we recommend using dedicated tools such as `s3cmd` to manage large data sets.
+We provide an Amazon S3-compatible API for programmatic access or usage with any compatible software. Therefore, we recommend using dedicated tools such as `s3cmd` to manage large data sets.
## How am I billed for Object Storage?
@@ -283,4 +283,4 @@ Large objects can be uploaded using [multipart uploads](/storage/object/api-cli/
Yes, a best practice is to create a [lifecycle rule](/storage/object/how-to/manage-lifecycle-rules/) targeting all objects in the bucket, or using a filter with an empty prefix.
In this case, all files contained within the selected bucket will have their storage class altered automatically on the dates stipulated by you.
-However, due to S3 Protocol restrictions, a lifecycle rule cannot be created to modify the storage class from Glacier to Standard.
\ No newline at end of file
+However, due to restrictions in the Amazon S3 protocol, a lifecycle rule cannot be created to modify the storage class from Glacier to Standard.
\ No newline at end of file
diff --git a/faq/serverless-containers.mdx b/faq/serverless-containers.mdx
index 8e6ec6e08d..755f98b150 100644
--- a/faq/serverless-containers.mdx
+++ b/faq/serverless-containers.mdx
@@ -97,6 +97,27 @@ Ensure that your code avoids heavy computations or long-running initialization a
Refer to our dedicated page about [Serverless Containers limitations and configuration restrictions](/serverless/containers/reference-content/containers-limitations/) for more information.
+## Where should I host my container images for deployment?
+
+<Macro id="container-registry-note" />
+
+## How can I copy an image from an external registry to Scaleway Container Registry?
+
+You can copy an image from an external registry by [logging in to the Scaleway Container Registry](/containers/container-registry/how-to/connect-docker-cli/) using the Docker CLI, and by copying the image as shown below:
+
+```sh
+# Pull the image from the external registry
+docker pull alpine:latest
+# Tag the image with your Scaleway registry namespace
+docker tag alpine:latest rg.fr-par.scw.cloud/example/alpine:latest
+# Push the image to your Scaleway Container Registry namespace
+docker push rg.fr-par.scw.cloud/example/alpine:latest
+```
+
+Alternatively, you can use tools such as [Skopeo](https://github.com/containers/skopeo) to copy the image:
+
+```sh
+# Log in to the registry ("nologin" as username, your secret key as password)
+skopeo login rg.fr-par.scw.cloud -u nologin -p $SCW_SECRET_KEY
+# Copy the image between registries without a local pull
+skopeo copy --override-os linux docker://docker.io/alpine:latest docker://rg.fr-par.scw.cloud/example/alpine:latest
+```
+
## Can I whitelist the IPs of my containers?
Serverless Containers does not yet support Private Networks. However, you can use the Scaleway IP ranges defined at [https://www.scaleway.com/en/peering/](https://www.scaleway.com/en/peering/) on Managed Databases and other products that allow IP filtering.
@@ -126,3 +147,13 @@ Scaleway Serverless Containers does not currently support Scaleway VPC or Privat
To add network restrictions on your resource, consult the [list of prefixes used at Scaleway](https://www.scaleway.com/en/peering/). Serverless resources do not have dedicated or predictable IP addresses.
+## How can I attach Block Storage to a Serverless Container?
+
+Scaleway Serverless Containers do not currently support attaching block storage. These containers are designed to be
+stateless, meaning they do not retain data between invocations. For persistent storage, we recommend using external
+solutions like Scaleway Object Storage.
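+
+As an illustration, any Amazon S3-compatible tool can persist data from a container to an Object Storage bucket. The bucket name and region below are placeholders:
+
+```sh
+# Upload a file generated by the container to Object Storage
+# (assumes the AWS CLI is configured with your Scaleway API keys)
+aws s3 cp /tmp/output.json s3://my-bucket/output.json \
+  --endpoint-url https://s3.fr-par.scw.cloud \
+  --region fr-par
+```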
+
+## Why does my container have an instance running after deployment, even with min-scale 0?
+
+Currently, a new container instance will always start after each deployment, even if there is no traffic and the minimum
+scale is set to 0. This behavior is not configurable at this time.
diff --git a/faq/serverless-functions.mdx b/faq/serverless-functions.mdx
index 73a18eacbd..708e500fe9 100644
--- a/faq/serverless-functions.mdx
+++ b/faq/serverless-functions.mdx
@@ -173,3 +173,14 @@ Upgrading a runtime is highly recommended in case of deprecation, and for runtim
Scaleway Serverless Functions does not currently support Scaleway VPC or Private Networks, though this feature is under development.
To add network restrictions on your resource, consult the [list of prefixes used at Scaleway](https://www.scaleway.com/en/peering/). Note that Serverless resources do not have dedicated or predictable IP addresses.
+
+## How can I attach Block Storage to a Serverless Function?
+
+Scaleway Serverless Functions do not currently support attaching block storage. These functions are designed to be
+stateless, meaning they do not retain data between invocations. For persistent storage, we recommend using external
+solutions like Scaleway Object Storage.
+
+## Why does my function have an instance running after deployment, even with min-scale 0?
+
+Currently, a new function instance will always start after each deployment, even if there is no traffic and the minimum
+scale is set to 0. This behavior is not configurable at this time.
diff --git a/faq/serverless-jobs.mdx b/faq/serverless-jobs.mdx
index 554909feb5..ce721a99ef 100644
--- a/faq/serverless-jobs.mdx
+++ b/faq/serverless-jobs.mdx
@@ -66,7 +66,7 @@ Serverless Jobs are billed on a pay-as-you-go basis, strictly on resource consum
* *Billed resources:* 864 000 - 400 000 = 464 000 GB-s
* *Cost:* 464 000 * €0.0000010 = **€0.47**
* **vCPU consumption**
- * *Allocated vCPU conversion:* 1120mVCPU = 1.12 vCPU
+ * *Allocated vCPU conversion:* 1120 mVCPU = 1.12 vCPU
* *Resource consumption:* 432 000 s * 1.12 vCPU = 483 840 vCPU-s
* *Free tier:* 200 000 vCPU-s
* *Billed resources:* 483 840 - 200 000 = 283 840 vCPU-s
@@ -94,8 +94,33 @@ Scaleway Serverless Jobs is part of the Scaleway ecosystem, it can therefore be
When starting a job, you can use contextual options to define the number of jobs to execute at the same time. Refer to the [dedicated documentation](/serverless/jobs/how-to/run-job/#how-to-run-a-job-with-contextual-options) for more information.
+## Where should I host my job images for deployment?
+
+<Macro id="container-registry-note" />
+
+## How can I copy an image from an external registry to Scaleway Container Registry?
+
+You can copy an image from an external registry by [logging in to the Scaleway Container Registry](/containers/container-registry/how-to/connect-docker-cli/) using the Docker CLI, and by copying the image as shown below:
+
+```sh
+# Pull the image from the external registry
+docker pull alpine:latest
+# Tag the image with your Scaleway registry namespace
+docker tag alpine:latest rg.fr-par.scw.cloud/example/alpine:latest
+# Push the image to your Scaleway Container Registry namespace
+docker push rg.fr-par.scw.cloud/example/alpine:latest
+```
+
+Alternatively, you can use tools such as [Skopeo](https://github.com/containers/skopeo) to copy the image:
+
+```sh
+# Log in to the registry ("nologin" as username, your secret key as password)
+skopeo login rg.fr-par.scw.cloud -u nologin -p $SCW_SECRET_KEY
+# Copy the image between registries without a local pull
+skopeo copy --override-os linux docker://docker.io/alpine:latest docker://rg.fr-par.scw.cloud/example/alpine:latest
+```
+
## How can I configure access to a Private Network?
Scaleway Serverless Jobs does not currently support Scaleway VPC or Private Networks, though this feature is under development.
-To add network restrictions on your resource, consult the [list of prefixes used at Scaleway](https://www.scaleway.com/en/peering/). Serverless resources do not have dedicated or predictable IP addresses.
\ No newline at end of file
+To add network restrictions on your resource, consult the [list of prefixes used at Scaleway](https://www.scaleway.com/en/peering/). Serverless resources do not have dedicated or predictable IP addresses.
+
+## Can I securely use sensitive information with Serverless Jobs?
+
+Yes, you can use sensitive data such as API secret keys, passwords, TLS/SSL certificates, or tokens. Serverless Jobs seamlessly integrates with [Secret Manager](/identity-and-access-management/secret-manager/), which allows you to securely reference sensitive information within your jobs. Refer to the [dedicated documentation](/serverless/jobs/how-to/reference-secret-in-job/) for more information.
diff --git a/identity-and-access-management/iam/api-cli/using-api-key-object-storage.mdx b/identity-and-access-management/iam/api-cli/using-api-key-object-storage.mdx
index 755832cf23..4fb19e53ca 100644
--- a/identity-and-access-management/iam/api-cli/using-api-key-object-storage.mdx
+++ b/identity-and-access-management/iam/api-cli/using-api-key-object-storage.mdx
@@ -5,7 +5,7 @@ meta:
content:
h1: Using IAM API keys with Object Storage
paragraph: This page explains how to use IAM API keys with Object Storage
-tags: API key Projects IAM API-key Preferred-project Object-Storage S3
+tags: API key Projects IAM API-key Preferred-project Object-Storage Amazon-S3
dates:
validation: 2024-05-27
posted: 2022-11-02
@@ -15,7 +15,7 @@ categories:
You can carry out actions on Scaleway Object Storage resources either via the [Scaleway console](https://console.scaleway.com/), or via a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/).
-While the Scaleway console gives you the option to specify the [Scaleway Project](#what-is-a-project) to carry out your Object Storage actions in, this option is not available via third-party API/CLI tools. These tools are based on a [standard S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services), which does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI.
+While the Scaleway console gives you the option to specify the [Scaleway Project](#what-is-a-project) to carry out your Object Storage actions in, this option is not available via third-party API/CLI tools. These tools are based on a [standard Amazon S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services), which does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI.
In this document, we explain the concept of preferred Projects for Object Storage, explain how to configure your IAM API key for this, and give some code examples for overriding the preferred Project when making an API call.
@@ -35,13 +35,13 @@ When you generate an API key with IAM, the key is associated with a specific [IA
## The impact of preferred Projects
-When you perform an action on Scaleway Object Storage resources using a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/), you are using tools based on a [standard S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services). This standard interface does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI. The preferred Project is specified when creating the API key (or can be edited at a later date).
+When you perform an action on Scaleway Object Storage resources using a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/), you are using tools based on a [standard Amazon S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services). This standard interface does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI. The preferred Project is specified when creating the API key (or can be edited at a later date).
Setting the preferred Project does not automatically give the API key bearer permissions for Object Storage in this Project. Ensure that the user/application is either the Owner of the Organization, or has a [policy](/identity-and-access-management/iam/concepts/#policy) giving them appropriate permissions for Object Storage in this Project. Note that the application of Object Storage permissions can take up to 5 minutes after creating a new rule or policy.
-When using the S3 CLI:
+When using the AWS S3 CLI:
- An action of listing the buckets (`aws s3 ls`) will list the buckets of the preferred Project
- An action of creating a bucket (`aws s3 mb`) will create a new bucket inside the preferred Project
- An action of moving an object from a bucket to another (`aws s3 mv source destination`) will only work if the source bucket and the destination buckets are in the preferred Project for an API key
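+
+For illustration, with placeholder bucket names, these operations behave as follows:
+
+```sh
+# Lists only the buckets of the API key's preferred Project
+aws s3 ls
+# Creates the bucket inside the preferred Project
+aws s3 mb s3://my-new-bucket
+# Succeeds only if both buckets belong to the preferred Project
+aws s3 mv s3://my-source-bucket/object.txt s3://my-destination-bucket/
+```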
diff --git a/identity-and-access-management/iam/concepts.mdx b/identity-and-access-management/iam/concepts.mdx
index 617e1e057b..54c9b71573 100644
--- a/identity-and-access-management/iam/concepts.mdx
+++ b/identity-and-access-management/iam/concepts.mdx
@@ -95,7 +95,7 @@ For each policy rule, you specify one or more permission sets (e.g. "list all In
## Preferred Project
-You can carry out actions on Scaleway Object Storage resources either via the [Scaleway console](https://console.scaleway.com), or via a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/). While the Scaleway console gives you the option to specify the [Scaleway Project](#what-is-a-project) to carry out your Object Storage actions in, this option is not available via third-party API/CLI tools. These tools are based on a [standard S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services), which does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI. See our page on [using API keys with Object Storage](/identity-and-access-management/iam/api-cli/using-api-key-object-storage/) for more information.
+You can carry out actions on Scaleway Object Storage resources either via the [Scaleway console](https://console.scaleway.com), or via a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/). While the Scaleway console gives you the option to specify the [Scaleway Project](#what-is-a-project) to carry out your Object Storage actions in, this option is not available via third-party API/CLI tools. These tools are based on a [standard Amazon S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services), which does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI. See our page on [using API keys with Object Storage](/identity-and-access-management/iam/api-cli/using-api-key-object-storage/) for more information.
## Principal
diff --git a/identity-and-access-management/iam/reference-content/overview.mdx b/identity-and-access-management/iam/reference-content/overview.mdx
index f18c4a683f..7cb9d1a04f 100644
--- a/identity-and-access-management/iam/reference-content/overview.mdx
+++ b/identity-and-access-management/iam/reference-content/overview.mdx
@@ -7,7 +7,7 @@ content:
paragraph: High-level overview of Scaleway IAM features.
tags: iam
dates:
- validation: 2024-04-01
+ validation: 2024-10-16
categories:
- iam
- console
diff --git a/identity-and-access-management/iam/reference-content/permission-sets.mdx b/identity-and-access-management/iam/reference-content/permission-sets.mdx
index e602197404..885481098d 100644
--- a/identity-and-access-management/iam/reference-content/permission-sets.mdx
+++ b/identity-and-access-management/iam/reference-content/permission-sets.mdx
@@ -6,7 +6,7 @@ content:
h1: Permission sets
paragraph: Explore how to define and manage permission sets for user access control.
dates:
- validation: 2024-04-01
+ validation: 2024-10-23
---
Permission sets and their scope make up [IAM rules](/identity-and-access-management/iam/concepts/#rule), which define the access rights that a principal (user, group or application) should have. They consist of sets of one or multiple [permissions](/identity-and-access-management/iam/concepts/#permission).
@@ -52,6 +52,7 @@ Below is a list of the permission sets available at Scaleway.
| KubernetesReadOnly | List and read access to Kubernetes |
| KubernetesFullAccess | Full access to create, read, list, edit and delete Kubernetes |
| KubernetesExternalNodeRegister | Attach external nodes to a Kosmos cluster |
+| KubernetesSystemMastersGroupAccess | Grants the Kubernetes `system:masters` role to perform any action on the cluster |
| DediboxReadOnly | List and read access to Dedibox |
| DediboxFullAccess | Full access to create, read, list, edit and delete Dedibox |
| ContainersReadOnly | List and read access to Containers |
@@ -80,6 +81,14 @@ Below is a list of the permission sets available at Scaleway.
| PrivateNetworksFullAccess | Full access to create, read, list, edit and delete Private Networks |
| VPCGatewayReadOnly | List and read access to Public Gateways |
| VPCGatewayFullAccess | Full access to create, read, list, edit and delete Public Gateways |
+| VPCFullAccess | Full access to VPC |
+| VPCReadOnly | Read access to VPC |
+| AutoscalingFullAccess | Full access to autoscaling |
+| AutoscalingReadOnly | Read access to autoscaling |
+| EdgeServicesFullAccess | Full access to Edge Services |
+| EdgeServicesReadOnly | Read access to Edge Services |
+| IPAMFullAccess | Full access to IPAM |
+| IPAMReadOnly | Read access to IPAM |
| LoadBalancersReadOnly | List and read access to Load Balancer |
| LoadBalancersFullAccess | Full access to create, read, list, edit and delete Load Balancer |
| DomainsDNSReadOnly | List and read access to Domains and DNS |
@@ -96,6 +105,10 @@ Below is a list of the permission sets available at Scaleway.
| TransactionalEmailDomainFullAccess | Full access to domains in Transactional Email. Does not include permissions for e-mails |
| TransactionalEmailEmailReadOnly | Read access to e-mails in Transactional Email. Does not include permissions for domain configuration |
| TransactionalEmailEmailFullAccess | Full access to e-mails in Transactional Email. Does not include permissions for domain configuration |
+| TransactionalEmailWebhookFullAccess | Full access to Webhooks in Transactional Email |
+| TransactionalEmailWebhookReadOnly | Read access to Webhooks in Transactional Email |
+| TransactionalEmailProjectSettingsFullAccess | Full access to Project settings in Transactional Email |
+| TransactionalEmailProjectSettingsReadOnly | Read access to Project settings in Transactional Email |
| WebHostingReadOnly | List and read access to Web Hosting |
| WebHostingFullAccess | Full access to create, read, list, edit and delete Web Hosting |
| SecretManagerReadOnly | List and read secrets' metadata (name, tags, creation date, etc.). Does not include permissions for data (versions) accessing or editing |
@@ -108,3 +121,6 @@ Below is a list of the permission sets available at Scaleway.
| BlockStorageFullAccess | Full access to create, read, list, edit and delete in Block Storage |
+
+ Some additional permission sets may appear in your Scaleway console if you are enrolled in beta testing for products or features.
+
diff --git a/identity-and-access-management/iam/reference-content/policy.mdx b/identity-and-access-management/iam/reference-content/policy.mdx
index a9bd3e1715..25344af96f 100644
--- a/identity-and-access-management/iam/reference-content/policy.mdx
+++ b/identity-and-access-management/iam/reference-content/policy.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Detailed additional content for policies within Scaleway IAM.
tags: iam
dates:
- validation: 2024-04-01
+ validation: 2024-10-16
categories:
- iam
- console
diff --git a/identity-and-access-management/iam/reference-content/reproduce-roles-project-api-keys.mdx b/identity-and-access-management/iam/reference-content/reproduce-roles-project-api-keys.mdx
index bb92ff5e8f..420707f26c 100644
--- a/identity-and-access-management/iam/reference-content/reproduce-roles-project-api-keys.mdx
+++ b/identity-and-access-management/iam/reference-content/reproduce-roles-project-api-keys.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page explains how to generate an access system similar to Scaleway's roles feature and Project-scoped API keys, that existed before IAM.
tags: iam
dates:
- validation: 2024-04-01
+ validation: 2024-10-16
categories:
- iam
- console
diff --git a/identity-and-access-management/iam/reference-content/users-groups-and-applications.mdx b/identity-and-access-management/iam/reference-content/users-groups-and-applications.mdx
index 79ac30568b..12c99168dd 100644
--- a/identity-and-access-management/iam/reference-content/users-groups-and-applications.mdx
+++ b/identity-and-access-management/iam/reference-content/users-groups-and-applications.mdx
@@ -6,7 +6,7 @@ content:
h1: Users, groups, and applications
paragraph: Manage users, groups, and applications within Scaleway IAM.
dates:
- validation: 2024-04-01
+ validation: 2024-10-16
---
IAM users, groups, and applications are principals in Scaleway Organizations. A principal is an entity that can be attached to policy.
diff --git a/identity-and-access-management/organizations-and-projects/additional-content/organization-quotas.mdx b/identity-and-access-management/organizations-and-projects/additional-content/organization-quotas.mdx
index 3f8b85c27c..300539094d 100644
--- a/identity-and-access-management/organizations-and-projects/additional-content/organization-quotas.mdx
+++ b/identity-and-access-management/organizations-and-projects/additional-content/organization-quotas.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page shows you the quotas associated with your Organization.
tags: account-quotas quotas security-rule security rule
dates:
- validation: 2024-04-10
+ validation: 2024-10-16
posted: 2021-02-10
categories:
- console
@@ -154,7 +154,7 @@ At Scaleway, quotas are applicable per [Organization](/identity-and-access-manag
|-------------|:----------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------:|
| Apple silicon M1-M | 1 per [Availability Zone](/compute/instances/concepts/#availability-zone) | 3 per [Availability Zone](/compute/instances/concepts/#availability-zone) |
| Apple silicon M2-M | To use this product, you must [validate your identity](/console/account/how-to/verify-identity/). | 5 per [Availability Zone](/compute/instances/concepts/#availability-zone) |
-| Apple silicon M2-L | To use this product, you must [validate your identity](/console/account/how-to/verify-identity/). | 5 per [Availability Zone](/compute/instances/concepts/#availability-zone) |
+| Apple silicon M2-L | 1 per [Availability Zone](/compute/instances/concepts/#availability-zone) | 5 per [Availability Zone](/compute/instances/concepts/#availability-zone) |
## Elastic Metal
diff --git a/identity-and-access-management/organizations-and-projects/concepts.mdx b/identity-and-access-management/organizations-and-projects/concepts.mdx
index fc7bdc0d3b..8d119e1394 100644
--- a/identity-and-access-management/organizations-and-projects/concepts.mdx
+++ b/identity-and-access-management/organizations-and-projects/concepts.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page explains all the concepts related to Organizations and Projects
tags: access-key organization secret-key ssh-key owner
dates:
- validation: 2024-04-01
+ validation: 2024-10-16
categories:
- console
---
@@ -30,9 +30,9 @@ The Organization ID identifies the [Organization](#organization) created with yo
A Project is a grouping of Scaleway [resources](#resource). Each Scaleway Organization comes with a default Project, and you can create new Projects if necessary. Projects are cross-region, meaning resources located in different [regions](/compute/instances/concepts/#region) can be grouped in one single Project. When grouping resources into different Projects, you can use [IAM](/identity-and-access-management/iam/concepts/#iam) to define custom access rights for each Project.
-## Project Dashboard
+## Project dashboard
-The Project Dashboard can be viewed within the [console](https://console.scaleway.com/project). On this dashboard, you can see an overview of the Project's [resources](#resources), along with the Project's settings and credentials ([SSH keys](#ssh-key)).
+The Project dashboard can be viewed within the [console](https://console.scaleway.com/project). On this dashboard, you can see an overview of the Project's [resources](#resources), along with the Project's settings and credentials ([SSH keys](#ssh-key)).
## Resource
diff --git a/labs/ipfs-naming/concepts.mdx b/labs/ipfs-naming/concepts.mdx
index 4979846163..672fbd6739 100644
--- a/labs/ipfs-naming/concepts.mdx
+++ b/labs/ipfs-naming/concepts.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page explains all concepts related to IPFS Naming
tags: ipfs-naming naming ipfs versioning labs web3 cid
dates:
- validation: 2024-04-08
+ validation: 2024-10-16
categories:
- labs
- naming
diff --git a/labs/ipfs-naming/quickstart.mdx b/labs/ipfs-naming/quickstart.mdx
index f7a798fc5c..566c2c1c7a 100644
--- a/labs/ipfs-naming/quickstart.mdx
+++ b/labs/ipfs-naming/quickstart.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page shows you how to get started with Scaleway IPFS Naming.
tags: ipfs-naming ipfs naming labs web3
dates:
- validation: 2024-04-15
+ validation: 2024-04-23
posted: 2023-10-10
categories:
- labs
diff --git a/labs/ipfs-pinning/concepts.mdx b/labs/ipfs-pinning/concepts.mdx
index 3444a8a9e7..0939f180b8 100644
--- a/labs/ipfs-pinning/concepts.mdx
+++ b/labs/ipfs-pinning/concepts.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page explains all concepts related to IPFS Pinning
tags: ipfs-pinning storage pinning volume ipfs versioning labs web3 cid
dates:
- validation: 2024-04-08
+ validation: 2024-10-16
categories:
- labs
- storage
diff --git a/labs/ipfs-pinning/quickstart.mdx b/labs/ipfs-pinning/quickstart.mdx
index 7b1a8847b7..aa2ce021f5 100644
--- a/labs/ipfs-pinning/quickstart.mdx
+++ b/labs/ipfs-pinning/quickstart.mdx
@@ -64,7 +64,7 @@ This operation allows you to add Scaleway as a remote service in your IPFS deskt
4. Click **Add a custom one** to add Scaleway as a remote pinning service. A configuration form displays.
5. Fill in the different fields by replacing the parameters in brackets with the relevant information:
- Nickname: `Scaleway`
- - API endpoint: `https:/.ipfs.labs.scw.cloud/`
+ - API endpoint: `https://.ipfs.labs.scw.cloud/`
- Secret access token: `<$SCW_SECRET_KEY>`
| Parameter | Description |
@@ -85,7 +85,8 @@ Scaleway should now appear in the list of remote pinning services.
## How to retrieve your data
-Now that you have pinned your data, you can retrieve it using [CloudFlare](https://www.cloudflare.com/), [Pinata](https://www.pinata.cloud/) or [Protocol Labs](https://protocol.ai/). These are IPFS gateways, which are services that allow you to interact with the IPFS network using regular, HTTP/HTTPS web protocols. This means you do not need a particular IPFS software to retrieve your data - you can instead use your regular web browser.
+Now that you have pinned your data, you can retrieve it using [CloudFlare](https://www.cloudflare.com/), [Pinata](https://www.pinata.cloud/) or [Protocol Labs](https://protocol.ai/).
+These services act as IPFS gateways, enabling you to interact with the IPFS network using standard HTTP/HTTPS web protocols. This means you don’t need specialized IPFS software to retrieve your data; you can simply use your regular web browser.
1. Click the icon next to the CID of the data you want to retrieve. The three IPFS gateways display.
-2. Click the IPFS gateway you wish to retrieve your data from. Your data displays in a new tab.
\ No newline at end of file
+2. Click the IPFS gateway you wish to retrieve your data from. Your data displays in a new tab.
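+
+Alternatively, you can fetch pinned content from the command line with a plain HTTPS request through a public gateway. The CID below is a placeholder:
+
+```sh
+# Retrieve pinned content through a public IPFS gateway
+curl -L "https://ipfs.io/ipfs/{your_cid}"
+```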
diff --git a/labs/ipfs-pinning/reference-content/install-ipfs-desktop.mdx b/labs/ipfs-pinning/reference-content/install-ipfs-desktop.mdx
index 87c5b5d720..3e5753351e 100644
--- a/labs/ipfs-pinning/reference-content/install-ipfs-desktop.mdx
+++ b/labs/ipfs-pinning/reference-content/install-ipfs-desktop.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page shows you how to use Scaleway IPFS Pinning with IPFS Desktop
tags: ipfs pinning storage ipfs-storage volume desktop labs web3
dates:
- validation: 2024-04-03
+ validation: 2024-10-16
posted: 2023-09-26
categories:
- labs
diff --git a/macros/bare-metal/dedibox-scaleway-migration.mdx b/macros/bare-metal/dedibox-scaleway-migration.mdx
index 300670c9a1..805e506cf2 100644
--- a/macros/bare-metal/dedibox-scaleway-migration.mdx
+++ b/macros/bare-metal/dedibox-scaleway-migration.mdx
@@ -1,5 +1,6 @@
---
macro: dedibox-scaleway-migration
---
-
-Only users who **have completed** the linking of their the Dedibox account to the Scaleway console are granted access to the Dedibox section within the Scaleway console. For detailed information on the migration process, please refer to our [account linking documentation](/bare-metal/dedibox/how-to/link-dedibox-account/). **If you cannot locate the Dedibox link in Scaleway's console side menu, you can use the [Dedibox console](https://console.online.net)** to place orders, manage your Dediboxes, and access the [related documentation](/dedibox/dedicated-servers/quickstart/).
+
+ Only users who **have completed** the linking of their Dedibox account to the Scaleway console are granted access to the Dedibox section within the Scaleway console. For detailed information on the migration process, please refer to our [account linking documentation](/bare-metal/dedibox/how-to/link-dedibox-account/). **If you cannot locate the Dedibox link in Scaleway's console side menu, you can use the [Dedibox console](https://console.online.net)** to place orders, manage your Dediboxes, and access the [related documentation](/dedibox/dedicated-servers/quickstart/).
+
\ No newline at end of file
diff --git a/macros/serverless/container-registry-note.mdx b/macros/serverless/container-registry-note.mdx
new file mode 100644
index 0000000000..e407cb12c2
--- /dev/null
+++ b/macros/serverless/container-registry-note.mdx
@@ -0,0 +1,6 @@
+---
+macro: container-registry-note
+---
+
+[Scaleway's Container Registry](/containers/container-registry/) integrates seamlessly with Serverless Containers and Jobs at a [competitive price](/faq/containerregistry/#how-am-i-billed-for-scaleway-container-registry).
+Serverless products support external public registries (such as [Docker Hub](https://hub.docker.com/search?q=)), but we do not recommend using them: uncontrolled rate limiting can cause failures when starting resources, and their usage conditions and pricing may change unexpectedly.
diff --git a/managed-databases/mongodb/how-to/connect-database-instance.mdx b/managed-databases/mongodb/how-to/connect-database-instance.mdx
index c9d78d5089..c5e9d1f568 100644
--- a/managed-databases/mongodb/how-to/connect-database-instance.mdx
+++ b/managed-databases/mongodb/how-to/connect-database-instance.mdx
@@ -44,7 +44,7 @@ To connect to a public endpoint using the MongoDB® shell:
1. Replace the following variables in the command as described:
```sh
- mongosh "mongodb+srv://{instance_id}.mgdb.{region}.scw.cloud" --tlsCAFile {your_certificate.pem} -u {username
+ mongosh "mongodb+srv://{instance_id}.mgdb.{region}.scw.cloud" --tlsCAFile {your_certificate.pem} -u {username}
```
- `{your-certificate.pem}` - the TLS certificate downloaded on **step 3**.
diff --git a/managed-databases/postgresql-and-mysql/api-cli/verify-ca-postgresql.mdx b/managed-databases/postgresql-and-mysql/api-cli/verify-ca-postgresql.mdx
index 69eadd66b7..523d516847 100644
--- a/managed-databases/postgresql-and-mysql/api-cli/verify-ca-postgresql.mdx
+++ b/managed-databases/postgresql-and-mysql/api-cli/verify-ca-postgresql.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Learn to verify the CA certificate for PostgreSQL using API/CLI.
tags: verify-ca verify certificate authority postgresql
dates:
- validation: 2024-04-15
+ validation: 2024-10-23
posted: 2023-04-01
categories:
- managed-databases
diff --git a/managed-databases/postgresql-and-mysql/concepts.mdx b/managed-databases/postgresql-and-mysql/concepts.mdx
index 9e261a9415..a528341d8a 100644
--- a/managed-databases/postgresql-and-mysql/concepts.mdx
+++ b/managed-databases/postgresql-and-mysql/concepts.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Understand key concepts for Scaleway Managed Databases for PostgreSQL and MySQL.
tags: endpoint allowed-ip clone-feature engine read-replica
dates:
- validation: 2024-04-08
+ validation: 2024-10-16
categories:
- managed-databases
- postgresql-and-mysql
diff --git a/managed-databases/postgresql-and-mysql/how-to/apply-maintenance.mdx b/managed-databases/postgresql-and-mysql/how-to/apply-maintenance.mdx
index a3242021b6..4d9392262b 100644
--- a/managed-databases/postgresql-and-mysql/how-to/apply-maintenance.mdx
+++ b/managed-databases/postgresql-and-mysql/how-to/apply-maintenance.mdx
@@ -7,14 +7,14 @@ content:
paragraph: This page explains how to apply maintenance to a Database Instance
tags: managed-database postgresql mysql database-instance maintenance
dates:
- validation: 2024-04-19
+ validation: 2024-10-23
posted: 2024-04-19
categories:
- managed-databases
- postgresql-and-mysql
---
-From time to time your Scaleway Managed Databases have to undergo maintenance to make sure that your nodes are up-to-date and have all the tools necessary to maintain a healthy lifecycle. For example, your engine version might need to be upgraded to the latest available minor version, or certain patches might need to be implemented.
+From time to time your Scaleway Managed Databases have to undergo maintenance to ensure that your nodes are up-to-date and have all the tools necessary to maintain a healthy lifecycle. For example, your engine version might need to be upgraded to the latest available minor version, or certain patches might need to be implemented.
These maintenance operations are set up, run, and scheduled by Scaleway, but you can select when to apply them to avoid interruptions during peak times.
diff --git a/managed-databases/postgresql-and-mysql/how-to/change-volume-type.mdx b/managed-databases/postgresql-and-mysql/how-to/change-volume-type.mdx
index 5905a95c47..778be44b1d 100644
--- a/managed-databases/postgresql-and-mysql/how-to/change-volume-type.mdx
+++ b/managed-databases/postgresql-and-mysql/how-to/change-volume-type.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page explains how to change the volume type of your Database
tags: managed-database database volume-type
dates:
- validation: 2024-04-06
+ validation: 2024-10-16
posted: 2021-03-10
categories:
- managed-databases
diff --git a/managed-databases/postgresql-and-mysql/reference-content/autohealing.mdx b/managed-databases/postgresql-and-mysql/reference-content/autohealing.mdx
index d4c248f2b1..bf1c81c2f0 100644
--- a/managed-databases/postgresql-and-mysql/reference-content/autohealing.mdx
+++ b/managed-databases/postgresql-and-mysql/reference-content/autohealing.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Understand the autohealing feature for PostgreSQL and MySQL databases.
tags: databases ha high-availability autohealing database-nodes
dates:
- validation: 2024-04-08
+ validation: 2024-10-16
categories:
- managed-databases
- postgresql-and-mysql
diff --git a/managed-databases/postgresql-and-mysql/reference-content/security-and-reliability.mdx b/managed-databases/postgresql-and-mysql/reference-content/security-and-reliability.mdx
index 6f92b70c90..58ddeebc37 100644
--- a/managed-databases/postgresql-and-mysql/reference-content/security-and-reliability.mdx
+++ b/managed-databases/postgresql-and-mysql/reference-content/security-and-reliability.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Learn more about shared responsibility in security and reliability practices for Managed Databases for PostgreSQL and MySQL
tags: databases postgresql shared responsibility security reliability
dates:
- validation: 2024-04-08
+ validation: 2024-10-16
categories:
- managed-databases
- postgresql-and-mysql
diff --git a/managed-databases/postgresql-and-mysql/troubleshooting/extension-errors.mdx b/managed-databases/postgresql-and-mysql/troubleshooting/extension-errors.mdx
index 72116ccfd7..30abc60517 100644
--- a/managed-databases/postgresql-and-mysql/troubleshooting/extension-errors.mdx
+++ b/managed-databases/postgresql-and-mysql/troubleshooting/extension-errors.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Troubleshoot extension errors for PostgreSQL databases.
tags: disk-full databases
dates:
- validation: 2024-04-09
+ validation: 2024-10-16
posted: 2024-04-09
categories:
- managed-databases
diff --git a/managed-databases/redis/api-cli/managing-username-and-password.mdx b/managed-databases/redis/api-cli/managing-username-and-password.mdx
index 52bd89a5b1..df72712900 100644
--- a/managed-databases/redis/api-cli/managing-username-and-password.mdx
+++ b/managed-databases/redis/api-cli/managing-username-and-password.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Learn to manage Redis™ usernames and passwords using API/CLI.
tags: databases user redis username password
dates:
- validation: 2024-04-08
+ validation: 2024-10-16
categories:
- managed-databases
- redis
diff --git a/managed-databases/redis/api-cli/using-pub-sub-feature.mdx b/managed-databases/redis/api-cli/using-pub-sub-feature.mdx
index 609d60af0c..7a143a3b97 100644
--- a/managed-databases/redis/api-cli/using-pub-sub-feature.mdx
+++ b/managed-databases/redis/api-cli/using-pub-sub-feature.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Guide to using the Pub/Sub feature in Redis™ with API/CLI.
tags: databases redis pub/sub messaging broker
dates:
- validation: 2024-04-15
+ validation: 2024-10-23
categories:
- managed-databases
- redis
diff --git a/managed-databases/redis/concepts.mdx b/managed-databases/redis/concepts.mdx
index b4b8b1341b..72ab23716a 100644
--- a/managed-databases/redis/concepts.mdx
+++ b/managed-databases/redis/concepts.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page explains all the concepts related to Managed Database for Redis™.
tags: endpoint redis allowed-ip cluster cluster-mode availability horizontal-scaling tcp tls vertical-scaling
dates:
- validation: 2024-04-15
+ validation: 2024-10-23
categories:
- managed-databases
- redis
diff --git a/managed-databases/redis/reference-content/default-user-permissions.mdx b/managed-databases/redis/reference-content/default-user-permissions.mdx
index 026b15ef09..7c266c64ce 100644
--- a/managed-databases/redis/reference-content/default-user-permissions.mdx
+++ b/managed-databases/redis/reference-content/default-user-permissions.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Reference guide to default user permissions for Redis™ databases.
tags: databases user redis username password
dates:
- validation: 2024-04-15
+ validation: 2024-10-23
categories:
- managed-databases
- redis
diff --git a/managed-databases/redis/reference-content/ensuring-data-persistence.mdx b/managed-databases/redis/reference-content/ensuring-data-persistence.mdx
index 35e6d69f60..4442996020 100644
--- a/managed-databases/redis/reference-content/ensuring-data-persistence.mdx
+++ b/managed-databases/redis/reference-content/ensuring-data-persistence.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Learn how to ensure data persistence in your Scaleway Redis™ database.
tags: databases user redis username password
dates:
- validation: 2024-04-15
+ validation: 2024-10-23
categories:
- managed-databases
- redis
diff --git a/managed-services/data-lab/index.mdx b/managed-services/data-lab/index.mdx
index 3c04c2aa39..89684521c4 100644
--- a/managed-services/data-lab/index.mdx
+++ b/managed-services/data-lab/index.mdx
@@ -7,7 +7,7 @@ meta:
diff --git a/managed-services/iot-hub/api-cli/iot-hub-routes.mdx b/managed-services/iot-hub/api-cli/iot-hub-routes.mdx
index e195ce3c37..5e1fd555b8 100644
--- a/managed-services/iot-hub/api-cli/iot-hub-routes.mdx
+++ b/managed-services/iot-hub/api-cli/iot-hub-routes.mdx
@@ -9,7 +9,7 @@ categories:
- managed-services
dates:
validation: 2024-04-22
-tags: iot iot-hub mqtt cli s3cmd s3
+tags: iot iot-hub mqtt cli s3cmd amazon-s3
---
Routes are integrations with the Scaleway ecosystem: they can forward MQTT messages to Scaleway services.
@@ -26,9 +26,9 @@ Routes are integrations with the Scaleway ecosystem: they can forward MQTT messa
- Installed the [Scaleway CLI](https://github.com/scaleway/scaleway-cli#scaleway-cli-v2) and [read the accompanying IoT document](/managed-services/iot-hub/api-cli/getting-started-with-iot-hub-cli/)
- Installed and configured [`s3cmd`](/tutorials/s3cmd/) for Scaleway
-## S3 Routes
+## Amazon S3 Routes
-The S3 route allows you to put the payload of MQTT messages directly into Scaleway's Object Storage.
+The Amazon S3 route allows you to put the payload of MQTT messages directly into Scaleway's Object Storage.
This section is a continuation of the [IoT Hub CLI quickstart](/managed-services/iot-hub/api-cli/getting-started-with-iot-hub-cli/). Make sure to follow the quickstart before beginning.
@@ -41,9 +41,9 @@ The S3 route allows you to put the payload of MQTT messages directly into Scalew
PREFIX="iot/messages"
# Create the bucket
s3cmd mb --region "$REGION" "s3://$BUCKET"
- # Grant write access to IoT Hub S3 Route Service to your bucket
+ # Grant write access to IoT Hub Amazon S3 Route Service to your bucket
s3cmd setacl --region "$REGION" "s3://$BUCKET" --acl-grant=write:555c69c3-87d0-4bf8-80f1-99a2f757d031:555c69c3-87d0-4bf8-80f1-99a2f757d031
- # Create the IoT Hub S3 Route
+ # Create the IoT Hub Amazon S3 Route
scw iot route create \
hub-id=$(jq -r '.id' hub.json) \
name=route-s3-cli topic="hello/world" \
diff --git a/managed-services/iot-hub/concepts.mdx b/managed-services/iot-hub/concepts.mdx
index ff81fc3302..f18f18d565 100644
--- a/managed-services/iot-hub/concepts.mdx
+++ b/managed-services/iot-hub/concepts.mdx
@@ -96,7 +96,7 @@ Increasing the QoS level decreases message throughput because of the additional
## Routes
-IoT Routes forward messages to non publish/subscribe destinations such as databases, REST APIs, Serverless functions and S3 buckets. See [Understanding Routes](/managed-services/iot-hub/reference-content/routes/) for further information.
+IoT Routes forward messages to non-publish/subscribe destinations such as databases, REST APIs, Serverless Functions, and Object Storage buckets. See [Understanding Routes](/managed-services/iot-hub/reference-content/routes/) for further information.
## TLS
diff --git a/managed-services/iot-hub/how-to/understand-event-messages.mdx b/managed-services/iot-hub/how-to/understand-event-messages.mdx
index 46baac3f28..8aee54ccf3 100644
--- a/managed-services/iot-hub/how-to/understand-event-messages.mdx
+++ b/managed-services/iot-hub/how-to/understand-event-messages.mdx
@@ -60,9 +60,9 @@ This section shows you the types of message that can be received in IoT Hub Even
## Route messages
-### S3 route errors
+### Amazon S3 route errors
- `"'BUCKET_NAME' s3 bucket write failed. Error HTTP_STATUS_CODE: ERROR_CODE (request-id: REQUEST_ID)"`:
- The route failed to write to the specified s3 bucket.
+ The route failed to write to the specified Object Storage bucket.
`BUCKET_NAME` is the name of the bucket the route attempted to write to; `HTTP_STATUS_CODE` and `ERROR_CODE` are standard [S3 error codes](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList)
## Database errors
diff --git a/managed-services/iot-hub/reference-content/routes.mdx b/managed-services/iot-hub/reference-content/routes.mdx
index f454f94c51..d45c10046c 100644
--- a/managed-services/iot-hub/reference-content/routes.mdx
+++ b/managed-services/iot-hub/reference-content/routes.mdx
@@ -8,7 +8,7 @@ content:
excerpt: |
This page provides detailed information about Scaleway IoT Hub Routes.
totalTime: PT5M
-tags: iot iot-hub route s3 database postgres postgresql mysql rest api inference
+tags: iot iot-hub route amazon-s3 database postgres postgresql mysql rest api inference
dates:
validation: 2024-05-06
posted: 2021-08-31
diff --git a/managed-services/transactional-email/reference-content/smtp-configuration.mdx b/managed-services/transactional-email/reference-content/smtp-configuration.mdx
index 8a2b0be1a2..b6f5c11d61 100644
--- a/managed-services/transactional-email/reference-content/smtp-configuration.mdx
+++ b/managed-services/transactional-email/reference-content/smtp-configuration.mdx
@@ -35,7 +35,7 @@ Your Scaleway SMTP username is the Project ID of the Project in which the TEM do
Your password is the secret key of the API key of the project used to manage your TEM domain. Follow this procedure to [generate API keys for API and SMTP sending with IAM](https://www.scaleway.com/en/docs/managed-services/transactional-email/how-to/generate-api-keys-for-tem-with-iam/).
-4 - **Encryption method** - If you want to encrypt the connection between your application and the SMTP server, there are usually two methods available:
+4 - **Encryption method** - An encrypted connection between your application and the SMTP server is **mandatory**, and two methods are available:
- **SSL/TLS**: Also known as SMTPS, it directly creates a secure connection on a dedicated port, such as `465` or `2465`.
- **STARTTLS**: This type will upgrade any insecure connections to secure connections on a non-secure port, such as 587.
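+
+As a quick check, you can test each method from a shell. The `smtp.tem.scw.cloud` endpoint below is an assumption; use the host shown in your console if it differs:
+
+```sh
+# Test an implicit SSL/TLS (SMTPS) connection on port 465
+openssl s_client -connect smtp.tem.scw.cloud:465
+# Test the STARTTLS upgrade on port 587
+openssl s_client -starttls smtp -connect smtp.tem.scw.cloud:587
+```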
diff --git a/managed-services/transactional-email/reference-content/tem-capabilities-and-limits.mdx b/managed-services/transactional-email/reference-content/tem-capabilities-and-limits.mdx
index dbc78ad6be..00e988ada6 100644
--- a/managed-services/transactional-email/reference-content/tem-capabilities-and-limits.mdx
+++ b/managed-services/transactional-email/reference-content/tem-capabilities-and-limits.mdx
@@ -7,33 +7,45 @@ content:
paragraph: Understand the capabilities and limits of Scaleway Transactional Email.
tags: transactional email-capabilities transactional-email quotas
dates:
- validation: 2024-05-21
+ validation: 2024-10-17
posted: 2022-11-07
categories:
- managed-services
---
-This page provides information about the capabilities and limits of Scaleway Transactional Email service, depending on user quotas.
+Scaleway's Transactional Email service allows users to send automated emails triggered by specific events, such as account notifications, password resets, and order confirmations. This service is optimized for high-volume email sending, ensuring fast and reliable delivery of critical messages to end users.
-Every [Organization](/identity-and-access-management/iam/concepts/#organization) has quotas, which are limits on the number of Scaleway resources they can use. Below is a list of basic quotas available for Transactional Email.
+The service is built on scalable infrastructure capable of handling substantial email volumes. However, sending capacities are evaluated on a case-by-case basis, depending on factors such as:
-
- - Additional quotas can be added on a case-by-case basis. If you have already validated your payment method and your identity and want to increase your quota beyond the values shown on this page, [contact our support team](https://console.scaleway.com/support/create)
- - Starting from December 1st 2023, Transactional Email no longer applies an hourly quota for your email sending. Find out more about Transactional Email's pricing on the [product pricing page](https://www.scaleway.com/en/pricing/?tags=available,managedservices-transactionalemail-transactionalemail).
+- Usage patterns
+- Sender reputation
+- Adherence to best practices
+
+New users may face stricter initial limits, while established customers can benefit from higher limits, potentially sending millions of emails. These quotas are subject to validation by the Scaleway support team.
+
+
+ Customers must adhere to industry standards and our [General Terms of Services](https://www-uploads.scaleway.com/General_Terms_of_Services_v17072024_45d4879c08.pdf), including compliance with [Scaleway's anti-spam policy](https://tem.s3.fr-par.scw.cloud/antispam_policy.pdf). Failure to comply may result in temporary or permanent restrictions to the service.
+## Transactional Email service quotas
-| | [Payment method validated](/console/billing/how-to/add-payment-method/#how-to-add-a-credit-card) | Payment method and [identity validated](/console/account/how-to/verify-identity/) |
-|-------------------------------------|-------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------|
-| Max. of recipients | 3 | 3 |
-| Domains per Organization | 2 | 5 |
-| Emails sent per month | 500 | 5000 |
-| Max. of attachments | 2 | 2 |
-| Max. size of an email (Mo) for API | 2* | 2* |
-| Max. size of an email (Mo) for SMTP | 2* | 2* |
+| Quota | Default Quota | Maximum Quota | Upgradable |
+|------------------------------------------------|-------------------|-------------------|----------------|
+| Maximum number of emails per month | 10,000 | Unlimited | Yes |
+| Maximum number of attachments per email | 10 | Custom defined | Yes |
+| Maximum number of recipients per email | 10 | Custom defined | Yes |
+| Maximum email size (API) | 2 MB* | 2 MB* | No |
+| Maximum email size (SMTP) | 50 MB* | Custom defined | Yes |
+
+*Including the email body and all attachments.
+
+
+ Quotas can be increased on a case-by-case basis. If you have validated your payment method and identity and want to increase your limits beyond the values shown, [contact our support team](https://console.scaleway.com/support/create).
+
-*Including the email and the attachments in the email.
+As of **December 1st, 2023**, the Transactional Email service no longer applies an hourly quota for email sending.
+For more information on pricing, visit the [Transactional Email pricing page](https://www.scaleway.com/fr/tarifs/managed-services/#transactional-email).
### Types of attachments available
diff --git a/managed-services/webhosting/concepts.mdx b/managed-services/webhosting/concepts.mdx
index 72720dcc67..d74f3805a6 100644
--- a/managed-services/webhosting/concepts.mdx
+++ b/managed-services/webhosting/concepts.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page explains all the concepts related to Scaleway’s Web Hosting service
tags: managed-services webhosting
dates:
- validation: 2024-04-22
+ validation: 2024-10-28
categories:
- managed-services
---
diff --git a/managed-services/webhosting/how-to/manage-databases.mdx b/managed-services/webhosting/how-to/manage-databases.mdx
new file mode 100644
index 0000000000..cfb3789f86
--- /dev/null
+++ b/managed-services/webhosting/how-to/manage-databases.mdx
@@ -0,0 +1,70 @@
+---
+meta:
+ title: How to manage databases
+ description: Discover how to manage databases for Scaleway Web Hosting plans from the console.
+content:
+ h1: How to manage databases
+ paragraph: Discover how to manage databases for Scaleway Web Hosting plans from the console.
+tags: webhosting
+dates:
+ validation: 2024-10-24
+ posted: 2024-10-24
+categories:
+ - managed-services
+---
+
+You can create and manage databases for your websites and applications, including creating users and updating passwords, directly from the Scaleway console.
+
+
+
+
+- A Scaleway account logged into the [console](https://console.scaleway.com)
+- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
+- [Registered a domain name](/network/domains-and-dns/how-to/register-internal-domain/) at Scaleway or another registrar
+- A Web Hosting plan
+
+
+ This guide focuses on managing your databases through the Scaleway console. For advanced configurations, you can access your [Web Hosting control panel (cPanel or Plesk)](/managed-services/webhosting/quickstart/#how-to-access-the-web-hosting-control-panel-from-the-scaleway-console).
+
+
+## How to create a database
+
+1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
+2. Click the domain name of the Web Hosting service you want to configure. The **Hosting information** page displays.
+3. Click the **Databases** tab to display information related to your databases.
+4. Click **Create database** in the **Database** section of the page. A pop-up displays.
+5. Enter a name for the database, then select an existing database user from the drop-down list or create a new one by entering a username and password. Click **Create database** to create the database.
+
+## How to delete a database
+
+1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
+2. Click the domain name of the Web Hosting service you want to configure. The **Hosting information** page displays.
+3. Click the **Databases** tab to display information related to your databases.
+4. Click next to the database you want to delete. A pop-up displays.
+5. Click **Delete database** to confirm the action and delete the database.
+
+## How to create a database user
+
+1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
+2. Click the domain name of the Web Hosting service you want to configure. The **Hosting information** page displays.
+3. Click the **Databases** tab to display information related to your databases.
+4. Click **Create database user** in the **Database users** section of the page to create a new one. A pop-up displays.
+5. Enter a username and password. Then click **Create database user** to create the user.
+
+## How to update the password of a database user
+
+1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
+2. Click the domain name of the Web Hosting service you want to configure. The **Hosting information** page displays.
+3. Click the **Databases** tab to display information related to your databases.
+4. Click the options icon > **Change password** next to the database user whose password you want to change. A pop-up displays.
+5. Enter the new password and click **Update database user** to submit the form and update the password.
+
+
+## How to delete a database user
+
+1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
+2. Click or the domain name of the Web Hosting service you want to configure. The **Hosting information** page displays.
+3. Click the **Databases** tab to display information related to your databases.
+4. Click the options icon > **Delete** next to the database user you want to delete. A pop-up displays.
+5. Click **Delete database user** to confirm the action and delete the user.
diff --git a/managed-services/webhosting/how-to/manage-email-accounts.mdx b/managed-services/webhosting/how-to/manage-email-accounts.mdx
new file mode 100644
index 0000000000..81885c84f9
--- /dev/null
+++ b/managed-services/webhosting/how-to/manage-email-accounts.mdx
@@ -0,0 +1,60 @@
+---
+meta:
+ title: How to manage email accounts
+ description: Discover how to manage email accounts for Scaleway Web Hosting plans from the console.
+content:
+ h1: How to manage email accounts
+ paragraph: Discover how to manage email accounts for Scaleway Web Hosting plans from the console.
+tags: webhosting
+dates:
+ validation: 2024-10-24
+ posted: 2024-10-24
+categories:
+ - managed-services
+---
+
+Email accounts let you send, receive, and store electronic messages over the internet. They can be accessed via the POP3 and IMAP protocols and managed from the Scaleway console.
+
+
+
+- A Scaleway account logged into the [console](https://console.scaleway.com)
+- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
+- [Registered a domain name](/network/domains-and-dns/how-to/register-internal-domain/) at Scaleway or another registrar
+- A Web Hosting plan
+
+
+ This guide focuses on managing your email accounts through the Scaleway console. For advanced configurations, you can access your [Web Hosting control panel (cPanel or Plesk)](/managed-services/webhosting/quickstart/#how-to-access-the-web-hosting-control-panel-from-the-scaleway-console).
+
+
+## How to create an email account
+
+1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
+2. Click the domain name of the Web Hosting service you want to configure. The **Hosting information** page displays.
+3. Click the **Emails** tab to display information related to your email accounts.
+4. Click **Create email account** to create a new one. A pop-up displays.
+5. Select the domain you want to associate the email account with from the drop-down list, then enter a username and password.
+
+ The username is the part of your email address in front of the @.
+
+6. Click **Create email account** to confirm the action and create the account.
+
+
+ To access webmail for the email account, click **Access webmail** next to the email account.
+
+
+
+## How to update the password of an email account
+
+1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
+2. Click the domain name of the Web Hosting service you want to configure. The **Hosting information** page displays.
+3. Click the **Emails** tab to display information related to your email accounts.
+4. Click the options icon > **Change password** next to the email account whose password you want to change. A pop-up displays.
+5. Enter the new password and click **Change password** to submit the form and update the password.
+
+## How to delete an email account
+
+1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
+2. Click the domain name of the Web Hosting service you want to configure. The **Hosting information** page displays.
+3. Click the **Emails** tab to display information related to your email accounts.
+4. Click the options icon > **Delete** next to the email account you want to delete. A pop-up displays.
+5. Type **DELETE** in the pop-up and click **Delete email account** to confirm the action and delete the account.
diff --git a/managed-services/webhosting/how-to/manage-ftp-accounts.mdx b/managed-services/webhosting/how-to/manage-ftp-accounts.mdx
new file mode 100644
index 0000000000..36c789ef8f
--- /dev/null
+++ b/managed-services/webhosting/how-to/manage-ftp-accounts.mdx
@@ -0,0 +1,53 @@
+---
+meta:
+ title: How to manage FTP accounts
+ description: Discover how to manage FTP accounts for Scaleway Web Hosting plans from the console.
+content:
+ h1: How to manage FTP accounts
+ paragraph: Discover how to manage FTP accounts for Scaleway Web Hosting plans from the console.
+tags: webhosting
+dates:
+ validation: 2024-10-24
+ posted: 2024-10-24
+categories:
+ - managed-services
+---
+
+FTP (File Transfer Protocol) is used to transfer data from your computer to your Web Hosting account and vice versa. This allows you to manage the content of your website.
+
+You can create and manage FTP accounts directly from the Scaleway console.
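+
+As an illustration (the hostname, username, and file name below are placeholders to replace with your own values), once an FTP account exists, you can upload a file to your hosting from a terminal with a standard FTP client such as `lftp`:
+
+```bash
+# lftp prompts for the FTP account password, then uploads index.html and exits
+lftp -u my-ftp-user ftp.example.com -e "put index.html; bye"
+```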
+
+
+
+- A Scaleway account logged into the [console](https://console.scaleway.com)
+- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
+- [Registered a domain name](/network/domains-and-dns/how-to/register-internal-domain/) at Scaleway or another registrar
+- A Web Hosting plan
+
+
+ This guide focuses on managing your FTP accounts through the Scaleway console. For advanced configurations, you can access your [Web Hosting control panel (cPanel or Plesk)](/managed-services/webhosting/quickstart/#how-to-access-the-web-hosting-control-panel-from-the-scaleway-console).
+
+
+## How to create an FTP account
+
+1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
+2. Click the domain name of the Web Hosting service you want to configure. The **Hosting information** page displays.
+3. Click the **FTP** tab to display information related to your FTP accounts.
+4. Click **Create FTP account** to create a new one. A pop-up displays.
+5. Enter the username and password for your FTP account, then click **Create FTP account** to submit the form and create the account.
+
+## How to update the password of an FTP account
+
+1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
+2. Click the domain name of the Web Hosting service you want to configure. The **Hosting information** page displays.
+3. Click the **FTP** tab to display information related to your FTP accounts.
+4. Click the options icon > **Change password** next to the FTP account whose password you want to change. A pop-up displays.
+5. Enter the new password and click **Change password** to submit the form and update the password.
+
+## How to delete an FTP account
+
+1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
+2. Click the domain name of the Web Hosting service you want to configure. The **Hosting information** page displays.
+3. Click the **FTP** tab to display information related to your FTP accounts.
+4. Click the options icon > **Delete** next to the FTP account you want to delete. A pop-up displays.
+5. Click **Delete FTP account** to confirm the action and delete the FTP account.
diff --git a/managed-services/webhosting/quickstart.mdx b/managed-services/webhosting/quickstart.mdx
index 089e71f6a8..43564f8e02 100644
--- a/managed-services/webhosting/quickstart.mdx
+++ b/managed-services/webhosting/quickstart.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page shows you how to get started with Web Hosting.
tags: webhosting cpanel
dates:
- validation: 2024-07-03
+ validation: 2024-10-24
posted: 2021-05-26
categories:
- webhosting
@@ -46,6 +46,10 @@ Scaleway provides Web Hosting plans with [cPanel](/managed-services/webhosting/r
## How to access the Web Hosting control panel from the Scaleway console
+
+ You can manage your [email accounts](/managed-services/webhosting/how-to/manage-email-accounts/), [databases](/managed-services/webhosting/how-to/manage-databases/), and [FTP accounts](/managed-services/webhosting/how-to/manage-ftp-accounts/) directly from the Scaleway console.
+
+
1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
@@ -130,6 +134,21 @@ Scaleway's Web Hosting control panels are a multi-language solution and you can
## How to create a mailbox
+
+ 1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
+ 2. Click the domain name of the Web Hosting service you want to configure. The **Hosting information** page displays.
+ 3. Click the **Emails** tab to display information related to your email accounts.
+ 4. Click **Create email account** to create a new one. A pop-up displays.
+ 5. Select the domain you want to associate the email account with from the drop-down list, then enter a username and password.
+
+ The username is the part of your email address in front of the @.
+
+ 6. Click **Create email account** to confirm the action and create the account.
+
+
+ To access webmail for the email account, click **Access webmail** next to the email account.
+
+
1. Open the [Web Hosting control panel](#how-to-access-the-web-hosting-control-panel-from-the-scaleway-console) and log in using your panel user and password. The Web Hosting panel dashboard displays.
2. Click **Email accounts** in the **Email** section of the dashboard. A list of your mailboxes displays.
@@ -171,6 +190,12 @@ Scaleway's Web Hosting control panels are a multi-language solution and you can
You can access the webmail platform for your Web Hosting directly from your Scaleway console.
+
+ 1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
+ 2. Click the domain name of the Web Hosting service you want to configure. The **Hosting information** page displays.
+ 3. Click the **Emails** tab to display information related to your email accounts.
+ 4. Click **Access webmail** next to the email address you want to access. The webmail interface displays in a new browser tab.
+
1. Click **Web Hosting** in the **Managed Services** section of the [console](https://console.scaleway.com/) side menu. The **Web Hosting** overview page displays.
2. Click the domain name of the Web Hosting plan you want to configure. The **Hosting information** page displays.
diff --git a/managed-services/webhosting/reference-content/php-version-overview.mdx b/managed-services/webhosting/reference-content/php-version-overview.mdx
new file mode 100644
index 0000000000..7a90107b3d
--- /dev/null
+++ b/managed-services/webhosting/reference-content/php-version-overview.mdx
@@ -0,0 +1,40 @@
+---
+meta:
+ title: PHP versions on Scaleway Web Hosting platforms
+ description: This page provides useful information about supported PHP versions across the different Web Hosting infrastructures.
+content:
+ h1: PHP versions on Scaleway Web Hosting platforms
+ paragraph: This page provides useful information about supported PHP versions across the different Web Hosting infrastructures.
+tags: webhosting php version
+dates:
+ validation: 2024-10-24
+ posted: 2024-10-24
+categories:
+ - webhosting
+---
+
+
+Scaleway Web Hosting is based on different infrastructures, each supporting different PHP versions.
+Below is an overview of the PHP versions available on each of the existing infrastructures: [Dedibox Classic](/dedibox-console/classic-hosting/), [Dedibox cPanel](/dedibox-console/cpanel-hosting/), and [Scaleway Web Hosting](/managed-services/webhosting/) (cPanel and Plesk).
+
+| PHP Version | Dedibox Classic | Dedibox cPanel | Scaleway cPanel | Scaleway Plesk |
+|-----------------|--------------------|-------------------|---------------------|--------------------|
+| PHP 4.4 | ✔️ | ✔️ | ❌ | ❌ |
+| PHP 5.2 | ✔️ | ✔️ | ❌ | ❌ |
+| PHP 5.4 | ✔️ | ✔️ | ❌ | ❌ |
+| PHP 5.5 | ✔️ | ✔️ | ❌ | ❌ |
+| PHP 5.6 | ✔️ | ✔️ | ❌ | ❌ |
+| PHP 7.0 | ✔️ | ✔️ | ❌ | ❌ |
+| PHP 7.1 | ✔️ | ✔️ | ❌ | ❌ |
+| PHP 7.2 | ✔️ | ✔️ | ❌ | ❌ |
+| PHP 7.3 | ✔️ | ✔️ | ❌ | ❌ |
+| PHP 7.4 | ❌ | ✔️ | ✔️ | ❌ |
+| PHP 8.0 | ❌ | ✔️ | ✔️ | ❌ |
+| PHP 8.1 | ❌ | ✔️ | ✔️ | ✔️ |
+| PHP 8.2 | ❌ | ❌ | ✔️ | ✔️ |
+| PHP 8.3 | ❌ | ❌ | ✔️ | ✔️ |
+
+**Key:**
+- ✔️ = PHP version is supported
+- ❌ = PHP version is not supported
+
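+To check which PHP version your website actually runs, a common approach (shown here as a sketch; how you upload the file depends on your plan) is to place a small `phpinfo.php` file in your web root and open it in a browser:
+
+```bash
+# Create a phpinfo.php file locally, then upload it to your web root (e.g., via FTP)
+cat > phpinfo.php <<'EOF'
+<?php phpinfo();
+EOF
+```
+
+Remember to delete the file afterwards, as it exposes detailed server configuration information.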
diff --git a/menu/navigation.json b/menu/navigation.json
index 9dd909be3d..b2644d999f 100644
--- a/menu/navigation.json
+++ b/menu/navigation.json
@@ -591,6 +591,10 @@
"label": "OpenAI API compatibility",
"slug": "openai-compatibility"
},
+ {
+ "label": "Support for function calling in Scaleway Managed Inference",
+ "slug": "function-calling-support"
+ },
{
"label": "Llama-3-8b-instruct model",
"slug": "llama-3-8b-instruct"
@@ -656,16 +660,24 @@
{
"items": [
{
- "label": "Query text models",
- "slug": "query-text-models"
+ "label": "Query language models",
+ "slug": "query-language-models"
},
{
"label": "Query embedding models",
"slug": "query-embedding-models"
},
+ {
+ "label": "Query vision models",
+ "slug": "query-vision-models"
+ },
{
"label": "Use structured outputs",
"slug": "use-structured-outputs"
+ },
+ {
+ "label": "Use function calling",
+ "slug": "use-function-calling"
}
],
"label": "How to",
@@ -807,6 +819,10 @@
{
"label": "I can not connect to my Mac mini through a remote SSH connection",
"slug": "cant-connect-using-ssh"
+ },
+ {
+ "label": "I can not create a new Apple Account on my Mac mini",
+ "slug": "cant-create-apple-account"
}
],
"label": "Troubleshooting",
@@ -1341,6 +1357,14 @@
"label": "Understanding the differences between ARM and x86 Instances",
"slug": "understanding-differences-x86-arm"
},
+ {
+ "label": "Understanding QEMU Guest Agent",
+ "slug": "understanding-qemu-guest-agent"
+ },
+ {
+ "label": "Understanding automatic network hot-reconfiguration",
+ "slug": "understanding-automatic-network-hot-reconfiguration"
+ },
{
"label": "Understanding Instance pricing",
"slug": "understanding-instance-pricing"
@@ -1654,10 +1678,6 @@
"label": "Monitoring clusters",
"slug": "cluster-monitoring"
},
- {
- "label": "Managing storage",
- "slug": "managing-storage"
- },
{
"label": "Managing tags",
"slug": "managing-tags"
@@ -1708,6 +1728,10 @@
"label": "Exposing Kubernetes services to the internet",
"slug": "exposing-services"
},
+ {
+ "label": "Modifying kernel parameters in a Kubernetes cluster using a DaemonSet",
+ "slug": "modifying-kernel-parameters-kubernetes-cluster"
+ },
{
"label": "Moving Kubernetes nodes to routed IPs",
"slug": "move-kubernetes-nodes-routed-ip"
@@ -2732,6 +2756,18 @@
"label": "Manage a Web Hosting plan",
"slug": "manage-webhosting"
},
+ {
+ "label": "Manage FTP accounts",
+ "slug": "manage-ftp-accounts"
+ },
+ {
+ "label": "Manage databases",
+ "slug": "manage-databases"
+ },
+ {
+ "label": "Manage email accounts",
+ "slug": "manage-email-accounts"
+ },
{
"label": "Order a dedicated IP for Web Hosting",
"slug": "order-dedicated-ip"
@@ -2772,6 +2808,10 @@
"label": "Plesk additional content",
"slug": "plesk-reference-content"
},
+ {
+ "label": "PHP versions on Scaleway Web Hosting platforms",
+ "slug": "php-version-overview"
+ },
{
"label": "Web Hosting Classic migration - Technical information",
"slug": "classic-hosting-migration-information"
@@ -3897,6 +3937,10 @@
"label": "Manage the scheduling of a job",
"slug": "manage-job-schedule"
},
+ {
+ "label": "Reference secrets in a job",
+ "slug": "reference-secret-in-job"
+ },
{
"label": "Delete a job",
"slug": "delete-job"
@@ -4397,7 +4441,7 @@
"slug": "optimize-object-storage-performance"
},
{
- "label": "Equivalence between S3 actions and IAM permissions",
+ "label": "Equivalence between Object Storage actions and IAM permissions",
"slug": "s3-iam-permissions-equivalence"
}
],
diff --git a/network/domains-and-dns/concepts.mdx b/network/domains-and-dns/concepts.mdx
index 5688589fd1..4e70790833 100644
--- a/network/domains-and-dns/concepts.mdx
+++ b/network/domains-and-dns/concepts.mdx
@@ -7,12 +7,16 @@ content:
paragraph: Discover concepts related to Scaleway's Domains and DNS service. Learn about DNS namespaces, name servers, domain names, DNS resolution, records, zones, and more.
tags: domains domain dns namespace dns-zone nameserver zone-file reverse-dns root-server
dates:
- validation: 2024-04-22
+ validation: 2024-10-29
categories:
- network
---
-**DNS namespace**
+## DNS
+
+The **D**omain **N**ame **S**ystem is a name management system for computing devices connected to a network, be it public (internet) or private. It translates text-based [domain names](#domain-name) into numerical IP addresses and other resources, such as mail servers.
+
+## DNS namespace
DNS domains are all organized in a hierarchy called the DNS namespace. The hierarchy consists of:
@@ -22,24 +26,12 @@ DNS domains are all organized in a hierarchy called the DNS namespace. The hiera
-**DNS name server**
+## DNS name server
A DNS name server stores the [DNS Records](#dns-record) for given domains. Scaleway has its own name servers for its managed domains.
-## Domain name
-
-A **domain name** or **domain** is a unique alphanumeric name used to identify a computer (web or email server) on the internet. It translates the numeric address of the computer to a more legible human-readable and memorable name. A domain can consist of a single [DNS Zone](#dns-zone) or be divided into several zones.
-
-## Domain name resolution
-
-Domain name resolution refers to the process by which human-readable domain names, like `www.mydomain.com`, are translated into the numerical IP addresses that computers and servers use to communicate on the internet.
-
-## DNS
-
-The **D**omain **N**ame **S**ystem is a name management system for computing devices connected to a network, be it public (Internet) or private. It translates text-based [domain names](#domain-name) to numerical IP addresses or other services such as emails.
-
## DNS record
A [DNS](#dns) Record holds information translating a domain or subdomain to an IP address, mail server or other domain/subdomain. DNS records for each [DNS Zone](#dns-zone) are stored within files called [DNS zone files](#dns-zone-file). These are hosted on [DNS nameservers](#dns-name-server). DNS records act as instructions for the DNS servers, so they know which domain names and IP addresses are associated with each other. DNS records can be of multiple types, called [resource records](#resource-records). Check out our documentation on [how to manage DNS records](/network/domains-and-dns/how-to/manage-dns-records/).
@@ -52,6 +44,14 @@ A DNS zone hosts the DNS records for a distinct part of the global domain namesp
A DNS zone file describes a [DNS Zone](#dns-zone), containing DNS records which constitute mappings between domain names, IP addresses and other resources.
+## Domain name
+
+A **domain name** or **domain** is a unique alphanumeric name used to identify a computer (web or email server) on the internet. It translates the numeric address of the computer to a more legible human-readable and memorable name. A domain can consist of a single [DNS Zone](#dns-zone) or be divided into several zones.
+
+## Domain name resolution
+
+Domain name resolution refers to the process by which human-readable domain names, like `www.mydomain.com`, are translated into the numerical IP addresses that computers and servers use to communicate on the internet.
+
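+You can observe this resolution from a terminal with a standard DNS lookup tool such as `dig`, using the example domain above (replace it with a real domain you own):
+
+```bash
+# Ask the resolver for the A record (IPv4 address) of the domain
+dig +short www.mydomain.com A
+```
+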
## External domain
An external domain is any domain created via an external registrar (i.e. not Scaleway). You can manage DNS zones for external domains from the Scaleway console.
@@ -65,7 +65,7 @@ An FQDN consists of a [hostname](#hostname), a [subdomain](#subdomain), a domain
## Hostname
-
+
When looking at a [fully qualified domain name](#fully-qualified-domain-name-(fqdn)), the hostname usually comes before the domain name or the subdomain. A hostname is a label or name assigned to a computer, device, or server on a network. It helps identify and locate a specific computer or any device connected to a network among all the others.
An example of a hostname can be `www` for the fully qualified domain name `www.mydomain.com.`.
diff --git a/network/domains-and-dns/how-to/manage-dns-records.mdx b/network/domains-and-dns/how-to/manage-dns-records.mdx
index 5b6facf21e..effba2658e 100644
--- a/network/domains-and-dns/how-to/manage-dns-records.mdx
+++ b/network/domains-and-dns/how-to/manage-dns-records.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Learn how to manage DNS records effectively with Scaleway Domains and DNS. Discover how to add, edit, and delete DNS records, along with advanced configurations like dynamic records for traffic management and Geo IP for optimizing user experience based on location.
tags: txt-record mx-record dns-record dns domain records
dates:
- validation: 2024-04-25
+ validation: 2024-10-29
posted: 2022-10-31
categories:
- network
@@ -24,9 +24,10 @@ categories:
1. Click **Domains and DNS** in the **Network** section of the [Scaleway console](https://console.scaleway.com) side menu.
2. Click the domain you want to manage. The domain's **Overview** page displays.
3. Click the **DNS zones** tab. A list of the DNS zones you have configured within the selected domain displays.
-4. Click **+ Add records** to add new records to your DNS zone. A pop-up displays.
-5. Fill in the required information for the record.
-6. Click **Add Records** to confirm.
+4. Click the DNS zone you want to add a record to.
+5. Click **+ Add records**. A pop-up displays.
+6. Fill in the required information for the record.
+7. Click **Add records** to confirm.
## How to edit DNS records
diff --git a/network/domains-and-dns/reference-content/understanding-domains-and-dns.mdx b/network/domains-and-dns/reference-content/understanding-domains-and-dns.mdx
index bac04d6e54..58277e1418 100644
--- a/network/domains-and-dns/reference-content/understanding-domains-and-dns.mdx
+++ b/network/domains-and-dns/reference-content/understanding-domains-and-dns.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Learn about domain management, DNS zones, and the advantages of utilizing subdomains.
tags: domains dns subdomain zone
dates:
- validation: 2024-04-25
+ validation: 2024-10-29
posted: 2023-04-12
categories:
- network
@@ -21,7 +21,7 @@ A domain name is an identification string that defines a realm of administrative
Domains are further divided into subdomains, that become DNS zones with their own set of administrators and DNS servers.
-The term domain is used in the business functions of the entity assigned to it and the term zone is usually used for configuration of DNS services.
+The term domain is used for the business functions of the entity it is assigned to, while the term zone is usually used for the configuration of DNS services.
## Example
@@ -69,4 +69,4 @@ An internationalized domain name (IDN) is an internet domain name that contains
Example: `allélua.com` converted in IDN is `xn--alllua-dva.com`.
-To simplify its use, the Domains and DNS API uses `unicode` (`UTF-8`) for name and data fields.
\ No newline at end of file
+To simplify its use, the [Domains and DNS API](https://www.scaleway.com/en/developers/api/domains-and-dns/) uses `unicode` (`UTF-8`) for name and data fields.
\ No newline at end of file
diff --git a/network/edge-services/reference-content/ssl-tls-certificate.mdx b/network/edge-services/reference-content/ssl-tls-certificate.mdx
index 611338c16c..217c7d83c1 100644
--- a/network/edge-services/reference-content/ssl-tls-certificate.mdx
+++ b/network/edge-services/reference-content/ssl-tls-certificate.mdx
@@ -54,9 +54,9 @@ This is the hassle-free option if you do not want to create or manage your own S
You must ensure that you have correctly set the [CNAME record](/network/edge-services/reference-content/cname-record/) for your domain. Without having done this, the Let's Encrypt certificate option in the console will not be available. It is also important to check the CNAME is correctly set up so that the certificate is properly generated and reviewed.
-Note that you will not have access to the generated certificate itself in Secret Manager or elsewhere. It is ent pipelineirely generated and managed "behind the scenes", and is not configurable by the user. If you reset your domain, or delete your Edge Services, Scaleway automatically deletes the generated Let's Encrypt certificate.
+Note that you will not have access to the generated certificate itself in Secret Manager or elsewhere. It is entirely generated and managed "behind the scenes", and is not configurable by the user. If you reset your domain, or delete your Edge Services, Scaleway automatically deletes the generated Let's Encrypt certificate.
-### Troubleshooting
+### Troubleshooting Let's Encrypt certificate errors
#### Errors
@@ -65,7 +65,7 @@ If there is a problem generating your managed Let's Encrypt certificate, an erro
| Error | Solution |
| ------------------------------------------------------------------------|---------------------------------------------------------------------|
| Too many certificates already issued for this domain | Wait, before retrying. This error occurs when you hit the limit of generating 50 Let's Encrypt certificates in a rolling 7 day period for the same domain. |
-| Internal managed certificate error | [Open a support ticket](https://console.scaleway.com/support/tickets/create). There has been an unspecified error in generating a managed Let's Encrypt certificate for your subdomain. |
+| Internal managed certificate error | There has been an unspecified error in generating a managed Let's Encrypt certificate for your subdomain. Try [resetting your domain to the default endpoint](/network/edge-services/how-to/configure-custom-domain/#how-to-reset-your-customized-domain), and then recustomizing it again, to trigger generation of a new Let's Encrypt certificate. If that fails, [open a support ticket](https://console.scaleway.com/support/tickets/create). |
| Certificate cannot be renewed - Your CNAME record is no longer accurate | Your CNAME record has either been deleted or modified. Without a correct CNAME record, we cannot renew your managed Let's Encrypt certificate. [Rectify your CNAME record](/network/edge-services/reference-content/cname-record/#how-to-create-a-cname-record), and when Edge Services detects the correct record exists, your certificate will be automatically renewed. |
## Using your own certificate
@@ -194,7 +194,7 @@ If you change your customized subdomain to something new, you will need to gener
-### Troubleshooting
+### Troubleshooting certificate errors
#### Errors
diff --git a/network/load-balancer/concepts.mdx b/network/load-balancer/concepts.mdx
index 13fd072fa4..fa28cab6d8 100644
--- a/network/load-balancer/concepts.mdx
+++ b/network/load-balancer/concepts.mdx
@@ -159,7 +159,7 @@ See [balancing-methods](#balancing-methods).
Routes allow you to specify, for a given frontend, which of its backends it should direct traffic to. For [HTTP](#protocol) frontends/backends, routes are based on HTTP Host headers. For [TCP](#protocol) frontends/backends, they are based on **S**erver **N**ame **I**dentification (SNI). You can configure multiple routes on a single Load Balancer.
-## S3 failover
+## Object Storage failover
See [customized error page](#customized-error-page)
diff --git a/network/load-balancer/how-to/set-up-s3-failover.mdx b/network/load-balancer/how-to/set-up-s3-failover.mdx
index 9480263afd..7f1cdb3fcb 100644
--- a/network/load-balancer/how-to/set-up-s3-failover.mdx
+++ b/network/load-balancer/how-to/set-up-s3-failover.mdx
@@ -5,7 +5,7 @@ meta:
content:
h1: How to configure a customized error page
paragraph: This page explains how to configure a customized error page for your Load Balancer, using the Scaleway Object Storage Bucket Website feature
-tags: s3-failover s3 failover load-balancer object-storage bucket
+tags: s3-failover amazon-s3 failover load-balancer object-storage bucket
dates:
validation: 2024-05-26
posted: 2022-02-21
diff --git a/network/load-balancer/reference-content/configuring-backends.mdx b/network/load-balancer/reference-content/configuring-backends.mdx
index ea5bfd099a..39a86d75a1 100644
--- a/network/load-balancer/reference-content/configuring-backends.mdx
+++ b/network/load-balancer/reference-content/configuring-backends.mdx
@@ -159,7 +159,7 @@ Benefits of this feature include:
- Providing information on service status or maintenance
- Redirecting to a mirrored site or skeleton site
-Note that when entering the S3 link to redirect to, the URL of the bucket endpoint is not sufficient. The bucket website URL is specifically required (e.g.`https://my-bucket.s3-website.nl-ams.scw.cloud`). See our [dedicated documentation](/network/load-balancer/how-to/set-up-s3-failover/) for further help.
+Note that when entering the Object Storage link to redirect to, the URL of the bucket endpoint is not sufficient. The bucket website URL is specifically required (e.g.`https://my-bucket.s3-website.nl-ams.scw.cloud`). See our [dedicated documentation](/network/load-balancer/how-to/set-up-s3-failover/) for further help.
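+
+As a quick sanity check (using the example bucket website URL above; replace it with your own), you can verify the page is served before configuring it as the failover target:
+
+```bash
+# Inspect the response headers from the bucket website endpoint
+curl -I https://my-bucket.s3-website.nl-ams.scw.cloud
+```
+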
## Health checks
diff --git a/network/public-gateways/concepts.mdx b/network/public-gateways/concepts.mdx
index 40f1f32dc4..506135d558 100644
--- a/network/public-gateways/concepts.mdx
+++ b/network/public-gateways/concepts.mdx
@@ -18,6 +18,10 @@ The Public Gateway can advertise a default route to resources on an attached Pri
You can choose to activate the advertisement of the default route when attaching a Private Network to a Public Gateway. The default route is propagated through DHCP.
+
+After activating the default route, all outbound and inbound traffic for resources attached to the Private Network is directed through the Public Gateway. This includes SSH traffic destined for Instances, which means you will need to [manage SSH connections differently](/network/public-gateways/troubleshooting/cant-connect-to-instance-with-pn-gateway/).
+
+
## DHCP
DHCP was previously a functionality of Scaleway Public Gateways, but has now been moved and is integrated directly into Private Networks. [Read more about DHCP on Private Networks](/network/vpc/concepts#dhcp).
diff --git a/network/public-gateways/how-to/create-a-public-gateway.mdx b/network/public-gateways/how-to/create-a-public-gateway.mdx
index ef783408e4..4a5d4fd068 100644
--- a/network/public-gateways/how-to/create-a-public-gateway.mdx
+++ b/network/public-gateways/how-to/create-a-public-gateway.mdx
@@ -22,8 +22,6 @@ categories:
## How to create a Public Gateway
-## How to create a Public Gateway
-
1. Click **Public Gateways** in the **Network** section of the side menu.
2. Click **Create Public Gateway** to launch the Public Gateway creation wizard.
3. Complete the following steps in the wizard:
diff --git a/network/public-gateways/troubleshooting/cant-connect-to-instance-with-pn-gateway.mdx b/network/public-gateways/troubleshooting/cant-connect-to-instance-with-pn-gateway.mdx
index 25a5ee6e31..d7c14450e2 100644
--- a/network/public-gateways/troubleshooting/cant-connect-to-instance-with-pn-gateway.mdx
+++ b/network/public-gateways/troubleshooting/cant-connect-to-instance-with-pn-gateway.mdx
@@ -1,24 +1,31 @@
---
meta:
- title: I cannot connect to my Instance using SSH after attaching it to a Private Network which has a Public Gateway
+ title: I cannot connect to my Instance using SSH after attaching it to a Private Network with a Public Gateway
description: This page explains how troubleshoot connection problems after attaching an Instance to a Private Network which has a Public Gateway
content:
- h1: I cannot connect to my Instance using SSH after attaching it to a Private Network which has a Public Gateway
+ h1: I cannot connect to my Instance using SSH after attaching it to a Private Network with a Public Gateway
paragraph: This page explains how troubleshoot connection problems after attaching an Instance to a Private Network which has a Public Gateway
tags: troubleshoot error private-network private network vpc public-gateway
dates:
- validation: 2024-05-24
+ validation: 2024-10-21
posted: 2021-05-26
categories:
- network
---
-
+If you are having trouble [connecting to your Instance via SSH](/compute/instances/how-to/connect-to-instance/) when the Instance is attached to a Private Network that also has an attached Public Gateway, read on for help and solutions.
-- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
+The action to take depends on whether:
-The action to take depends on whether the Private Network(s) your Instance is on have DHCP enabled, and whether your Public Gateway is set to advertise a default route (true by default).
+- The Private Network(s) attached to your Instance have [DHCP enabled](/network/vpc/how-to/activate-dhcp/), and
+- Your Public Gateway is set to [advertise a default route](/network/public-gateways/concepts/#default-route) (true by default).
-If it is not the case, disconnect the Instance from the Private Network, as there may be other factors impacting your Instance, like one of your Instances running a DHCP server.
+If the two conditions above are not met, there may be other factors impacting your Instance, such as one of your Instances running a DHCP server. Try disconnecting the Instance from the Private Network and reconnecting it.
-If DHCP is activated and your Public Gateway is set to advertise a default route, this is expected behavior as all the traffic towards your Instance now goes through the Public Gateway. To access your Instance using SSH, first create a static NAT association between a port of your Public Gateway (eg 2222) and the private IP assigned to your Instance, on the SSH port (22 by default). Then, SSH to the Public Gateway's IP on port 2222.
\ No newline at end of file
+If DHCP **is** activated and your Public Gateway **is** set to advertise a default route, not being able to connect to your Instance via SSH is **expected behavior**. All the traffic towards your Instance now goes through the Public Gateway.
+
+To access your Instance using SSH in this scenario, the recommended solution is to use [SSH bastion](/network/public-gateways/how-to/use-ssh-bastion/).
+
+
+SSH bastion is the recommended solution. For advanced users only, another manual workaround is to create a static NAT association between a port of your Public Gateway (e.g. `2222`) and the private IP assigned to your Instance, on the SSH port (`22` by default). Then, SSH to the Public Gateway's IP on port `2222`.
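+
+As a sketch of this manual workaround (the user and gateway IP below are placeholders), once the static NAT association from gateway port `2222` to the Instance's private IP on port `22` exists, you would connect with:
+
+```bash
+# SSH to the Instance through the Public Gateway's NAT rule on port 2222
+ssh -p 2222 root@<public-gateway-ip>
+```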
+
\ No newline at end of file
diff --git a/network/vpc/how-to/attach-resources-to-pn.mdx b/network/vpc/how-to/attach-resources-to-pn.mdx
index 08b6354dcd..2d9b13a684 100644
--- a/network/vpc/how-to/attach-resources-to-pn.mdx
+++ b/network/vpc/how-to/attach-resources-to-pn.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page explains how to attach resources to a Private Network in a Scaleway VPC
tags: regional-private-network private-network vpc virtual-private-cloud attach detach resources regional
dates:
- validation: 2024-04-02
+ validation: 2024-10-21
posted: 2023-03-21
categories:
- network
diff --git a/observability/cockpit/api-cli/configuring-grafana-agent.mdx b/observability/cockpit/api-cli/configuring-grafana-agent.mdx
index 18880ce763..d52d9934cb 100644
--- a/observability/cockpit/api-cli/configuring-grafana-agent.mdx
+++ b/observability/cockpit/api-cli/configuring-grafana-agent.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page provides information on how to configure the Grafana agent, push data sources, and visualize them in Grafana
tags: cockpit observability grafana-agent
dates:
- validation: 2024-04-02
+ validation: 2024-10-15
posted: 2023-01-10
categories:
- observability
@@ -15,6 +15,10 @@ categories:
This page explains how to configure the Grafana agent and the Zipkin collector to push your metrics, logs, and traces. You can use it to **push your data from Scaleway resources or external resources**.
+
+ [The Grafana agent has been deprecated by Grafana](https://grafana.com/docs/agent/latest/). Find out [how to configure Grafana Alloy](/observability/cockpit/how-to/send-metrics-with-grafana-alloy/#configuring-grafana-alloy), Grafana's new telemetry collector.
+
+
- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
@@ -32,7 +36,6 @@ This page explains how to configure the Grafana agent and the Zipkin collector t
It is not currently possible to collect logs if you are using OSX.
-
1. [Create a token](/observability/cockpit/how-to/create-token/) and select the push permission for metrics, traces, and logs.
2. Create a folder where you will keep your configuration files.
3. Create a configuration file inside your folder and name it `config.yaml`. This file will contain the Grafana agent configuration.
@@ -263,6 +266,3 @@ This page explains how to configure the Grafana agent and the Zipkin collector t
Refer to the [Grafana documentation](https://grafana.com/docs/grafana/latest/panels-visualizations/visualizations/traces/#add-a-panel-with-tracing-visualizations) to learn more about how to visualize your traces.
-
-
-
diff --git a/observability/cockpit/index.mdx b/observability/cockpit/index.mdx
index e367704732..c9d673b66b 100644
--- a/observability/cockpit/index.mdx
+++ b/observability/cockpit/index.mdx
@@ -61,6 +61,11 @@ meta:
url="/observability/cockpit/how-to/send-metrics-with-grafana-alloy/"
label="Read more"
/>
+
@@ -68,7 +73,7 @@ meta:
productLogo="cli"
title="Cockpit API"
description="Manage Cockpit using the Scaleway API."
- url="https://www.scaleway.com/en/developers/api/cockpit/"
+ url="https://www.scaleway.com/en/developers/api/cockpit/regional-api/"
label="Go to Cockpit API"
/>
diff --git a/observability/cockpit/reference-content/cockpit-limitations.mdx b/observability/cockpit/reference-content/cockpit-limitations.mdx
index 0156aa125f..1c60d6e170 100644
--- a/observability/cockpit/reference-content/cockpit-limitations.mdx
+++ b/observability/cockpit/reference-content/cockpit-limitations.mdx
@@ -88,7 +88,7 @@ Refer to our [documentation on understanding Cockpit usage and pricing](/observa
| Messaging and Queuing SNS | **Integrated*** | Not integrated | Not integrated |
| Instances | **Integrated*** | Not integrated | Not integrated |
| Cockpit | **Integrated*** | Not integrated | Not integrated |
-| Object Storage | **Integrated*** | Planned | Not integrated |
+| Object Storage | **Integrated*** | **Integrated*** | Not integrated |
| Serverless Containers | **Integrated*** | **Integrated*** | Not integrated |
| Serverless Functions | **Integrated*** | **Integrated*** | Not integrated |
| Serverless Jobs | **Integrated*** | **Integrated*** | Not integrated |
diff --git a/serverless/containers/concepts.mdx b/serverless/containers/concepts.mdx
index f59a9ed48a..b5628f1d39 100644
--- a/serverless/containers/concepts.mdx
+++ b/serverless/containers/concepts.mdx
@@ -14,7 +14,14 @@ categories:
## Cold Start
-Cold start is the time a container Instance takes to handle a request when it is called for the first time.
+Cold start is the time a Container takes to handle a request when it is called for the first time.
+
+The startup process consists of the following steps:
+* Downloading the container image to our infrastructure
+* Starting the container. Optimize your container startup speed to minimize this step (e.g., avoid waiting for slow connections or downloading large objects at startup)
+* Waiting for the container to listen on the configured port.
+
+Refer to the FAQ on [how to reduce cold starts](/faq/serverless-containers/#how-can-i-reduce-the-cold-starts-of-serverless-containers) for more information.
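+
+As a rough way to observe a cold start (assuming a container that has scaled to zero; replace the URL with your container endpoint), compare the first request with a second one:
+
+```bash
+# The first call may include a cold start; the second should hit a warm instance
+time curl -s -o /dev/null https://<your-container-endpoint>
+time curl -s -o /dev/null https://<your-container-endpoint>
+```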
## Concurrency
diff --git a/serverless/containers/how-to/add-trigger-to-a-container.mdx b/serverless/containers/how-to/add-trigger-to-a-container.mdx
index b2a0383771..ea2ebd38b0 100644
--- a/serverless/containers/how-to/add-trigger-to-a-container.mdx
+++ b/serverless/containers/how-to/add-trigger-to-a-container.mdx
@@ -7,7 +7,7 @@ content:
paragraph: How to add triggers to Scaleway Serverless Containers.
tags: containers
dates:
- validation: 2024-04-09
+ validation: 2024-10-15
posted: 2023-04-27
categories:
- serverless
@@ -44,7 +44,7 @@ You can create triggers on private containers, but to update the privacy of a co
SQS queues of the [Scaleway Messaging and Queuing](/serverless/messaging/quickstart/) product are compatible with Serverless Containers.
-The configuration of the queue retention may affect the behavior of the trigger. Refer to the [Considerations to configure event retention for SQS trigger inputs](/serverless/containers/reference-content/configure-trigger-inputs/) page for more information.
+The configuration of the queue retention can affect the behavior of the trigger. Refer to the [Considerations to configure event retention for SQS trigger inputs](/serverless/containers/reference-content/configure-trigger-inputs/) page for more information.
1. Click **Containers** in the **Serverless** section of the side menu. The containers page displays.
2. Click the relevant containers namespace.
@@ -85,7 +85,5 @@ NATS subjects of the [Scaleway Messaging and Queuing](/serverless/messaging/quic
The container will be invoked at the indicated time.
- Refer to our [cron schedules reference](/serverless/containers/reference-content/cron-schedules/) for more information.
+ Refer to the [cron schedules reference](/serverless/containers/reference-content/cron-schedules/) for more information.
-
-
diff --git a/serverless/containers/how-to/deploy-a-container-from-external-container-registry.mdx b/serverless/containers/how-to/deploy-a-container-from-external-container-registry.mdx
index c27e4e33d0..7f23fee3e2 100644
--- a/serverless/containers/how-to/deploy-a-container-from-external-container-registry.mdx
+++ b/serverless/containers/how-to/deploy-a-container-from-external-container-registry.mdx
@@ -20,6 +20,10 @@ A container is a package of software that includes all dependencies: code, runti
For now, Serverless Containers only supports public images.
+
+
+
+
- A Scaleway account logged into the [console](https://console.scaleway.com)
@@ -33,7 +37,7 @@ For now, Serverless Containers only supports public images.
4. Complete the following steps in the wizard:
- Select the **External** container registry.
- Enter the public container **image URL** provided by the external registry. For example:
- - `nginx:latest` to deploy the latest nginx image from [Docker Hub](https://hub.docker.com/search?q=)
+ - `nginx:latest` to deploy the latest nginx image from [Docker Hub](https://hub.docker.com/)
- `ghcr.io/namespace/image` to deploy an image from [GitHub Container Registry](https://github.com/features/packages)
- Choose the [port](/serverless/containers/concepts/#port) your container is listening on. We recommend configuring your container to listen on the `$PORT` environment variable.
- Choose a **name** for your container and, optionally, a **description**. The name must only contain alphanumeric characters and dashes.
@@ -44,6 +48,9 @@ For now, Serverless Containers only supports public images.
- Set your [scaling](/serverless/containers/concepts/#scaling) preferences, or leave them at default values. The Scaleway platform autoscales the number of available instances of your container to match the incoming load, depending on the settings you define here.
- Click **Advanced options** to define any [environment variables](/serverless/containers/concepts/#environment-variables) you want to inject into your container. For each environment variable, click **+Add variable** and enter the key/value pair.
- Add [secrets](/serverless/containers/concepts/#secrets) for your container. Secrets are environment variables which are injected into your container, but the values are not retained or displayed by Scaleway after initial validation.
+
+ Encode your environment variables and secrets to `base64` if they are too large or contain carriage returns.
+
- Set the desired [privacy policy](/serverless/containers/concepts/#privacy-policy) for your container. This defines whether container invocation may be done anonymously (**public**) or only via an authentication mechanism provided by the [Scaleway API](https://www.scaleway.com/en/developers/api/serverless-containers/#authentication) (**private**).
- Set a custom [timeout](/serverless/containers/concepts/#timeout) for your container.
- Verify the **estimated cost**.
diff --git a/serverless/containers/how-to/deploy-a-container-from-scaleway-container-registry.mdx b/serverless/containers/how-to/deploy-a-container-from-scaleway-container-registry.mdx
index d195f6f20d..0c5fb5afd6 100644
--- a/serverless/containers/how-to/deploy-a-container-from-scaleway-container-registry.mdx
+++ b/serverless/containers/how-to/deploy-a-container-from-scaleway-container-registry.mdx
@@ -42,6 +42,9 @@ You can deploy a container from the [Scaleway Container Registry](/containers/co
- Set your [scaling](/serverless/containers/concepts/#scaling) preferences, or leave them at default values. The Scaleway platform autoscales the number of available instances of your container to match the incoming load, depending on the settings you define here.
- Click **Advanced options** to define any [environment variables](/serverless/containers/concepts/#environment-variables) you want to inject into your container. For each environment variable, click **+Add variable** and enter the key/value pair.
- Add [secrets](/serverless/containers/concepts/#secrets) for your container. Secrets are environment variables which are injected into your container, but the values are not retained or displayed by Scaleway after initial validation.
+
+ Encode your environment variables and secrets to `base64` if they are too large or contain carriage returns.
+
- Set the desired [privacy policy](/serverless/containers/concepts/#privacy-policy) for your container. This defines whether container invocation may be done anonymously (**public**) or only via an authentication mechanism provided by the [Scaleway API](https://www.scaleway.com/en/developers/api/serverless-containers/#authentication) (**private**).
- Set a custom [timeout](/serverless/containers/concepts/#timeout) for your container.
- Verify the **estimated cost**.
diff --git a/serverless/containers/how-to/secure-a-container.mdx b/serverless/containers/how-to/secure-a-container.mdx
index 8164c90538..b29425faa3 100644
--- a/serverless/containers/how-to/secure-a-container.mdx
+++ b/serverless/containers/how-to/secure-a-container.mdx
@@ -57,7 +57,7 @@ secret:
Add the following [resource description](https://registry.terraform.io/providers/scaleway/scaleway/latest/docs/resources/container) in Terraform:
-```
+```hcl
secret_environment_variables = { "key" = "secret" }
```
diff --git a/serverless/containers/quickstart.mdx b/serverless/containers/quickstart.mdx
index 38e3f03420..890f701770 100644
--- a/serverless/containers/quickstart.mdx
+++ b/serverless/containers/quickstart.mdx
@@ -34,7 +34,7 @@ You can deploy a container from the Scaleway Container Registry or any other pub
If you have no existing Serverless Containers resources in your current Project, the creation process will guide you through the creation of a namespace, and then a container.
- Make sure that you have [created a Container Registry namespace](/containers/container-registry/how-to/create-namespace/) and [pushed the latest NGINX Docker image](/containers/container-registry/how-to/push-images/) to it.
+ Make sure that you have [created a Container Registry namespace](/containers/container-registry/how-to/create-namespace/) and [pushed the latest NGINX Docker image](/containers/container-registry/how-to/push-images/) (or any other image with a web server) to it.
1. Click **Containers** in the **Serverless** section of the side menu. The containers page displays.
@@ -70,6 +70,10 @@ If you have no existing Serverless Containers resources in your current Project,
If you have no existing Serverless Containers resources in your current Project, the creation process will guide you through the creation of a namespace, and then a container.
+
+
+
+
1. Click **Containers** in the **Serverless** section of the side menu. The containers page displays.
2. Click **Deploy container**. The containers namespace creation wizard displays.
3. Complete the following steps in the wizard:
diff --git a/serverless/containers/troubleshooting/common-errors.mdx b/serverless/containers/troubleshooting/common-errors.mdx
index 83931eceb9..d2fe6930cc 100644
--- a/serverless/containers/troubleshooting/common-errors.mdx
+++ b/serverless/containers/troubleshooting/common-errors.mdx
@@ -56,3 +56,30 @@ The new deploy failed, and the [fallback mechanism has been triggered](/serverle
### Possible solution
Identify the element that caused the deployment to fail, fix the error, and deploy the container again.
+
+## Issues when retrieving an external image
+
+### Cause
+
+Serverless products support external public registries (such as [Docker Hub](https://hub.docker.com/)), but we do not recommend using them due to uncontrolled rate limiting, which can lead to failures when starting resources, unexpected usage conditions, and pricing changes.
+
+### Solution
+
+We recommend using [Scaleway's Container Registry](/containers/container-registry/) instead, as it allows for a seamless integration with Serverless Containers and Jobs at a [competitive price](/faq/containerregistry/#how-am-i-billed-for-scaleway-container-registry).
+
+## My environment variable or secret is not properly injected in my container
+
+### Cause
+
+Environment variables or secrets that are too large, or that contain carriage returns and spread over several lines as shown below, will not be injected properly.
+
+```
+"hello
+world
+.
+"
+```
+
+### Solution
+
+To avoid issues while injecting environment variables and secrets, we recommend encoding them to `base64`.
\ No newline at end of file
diff --git a/serverless/functions/concepts.mdx b/serverless/functions/concepts.mdx
index 94a026da99..f15d497b0d 100644
--- a/serverless/functions/concepts.mdx
+++ b/serverless/functions/concepts.mdx
@@ -14,7 +14,14 @@ categories:
## Cold Start
-Cold start is the time a function Instance takes to handle a request when it is called for the first time.
+Cold start is the time a Function takes to handle a request when it is called for the first time.
+
+The startup process consists of the following steps:
+* Downloading the container image (which contains the built Function) to our infrastructure
+* Starting the container and the runtime
+* Waiting for the container to be ready.
+
+Refer to the FAQ on [how to reduce cold starts](/faq/serverless-functions/#how-to-reduce-cold-start-of-serverless-functions) for more information.
## CRON trigger
diff --git a/serverless/functions/how-to/add-trigger-to-a-function.mdx b/serverless/functions/how-to/add-trigger-to-a-function.mdx
index 757524c873..aaddb08b43 100644
--- a/serverless/functions/how-to/add-trigger-to-a-function.mdx
+++ b/serverless/functions/how-to/add-trigger-to-a-function.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Learn how to add triggers to your Serverless Functions in Scaleway.
tags: functions
dates:
- validation: 2024-04-09
+ validation: 2024-10-15
posted: 2023-04-27
categories:
- serverless
@@ -82,7 +82,5 @@ NATS subjects of the [Scaleway Messaging and Queuing](/serverless/messaging/quic
8. Click **Create trigger** to launch trigger creation.
- Refer to our [cron schedules reference](/serverless/functions/reference-content/cron-schedules/) for more information.
+ Refer to the [cron schedules reference](/serverless/functions/reference-content/cron-schedules/) for more information.
-
-
diff --git a/serverless/functions/how-to/create-a-function.mdx b/serverless/functions/how-to/create-a-function.mdx
index c4070b4153..a99e6cdccc 100644
--- a/serverless/functions/how-to/create-a-function.mdx
+++ b/serverless/functions/how-to/create-a-function.mdx
@@ -41,6 +41,9 @@ This page shows you how to deploy a [function](/serverless/functions/concepts/#f
5. Click **+ Advanced options** and complete the following steps:
- Define any **environment variables** you want to inject into your function. For each environment variable, click **+ Add variable** and enter the key/value pair.
- Optionally, set secret environment variables. **Secrets** are environment variables which are injected into your function and stored securely, but not displayed in the console after initial validation. Add a **key** and a **value**.
+
+ Encode your environment variables and secrets to `base64` if they are too large or contain carriage returns.
+
- Set the desired **privacy policy** for your function. This defines whether a function can be executed anonymously (**public**) or only via an authentication mechanism provided by the [Scaleway API](https://www.scaleway.com/en/developers/api/serverless-functions/#authentication) (**private**).
- Set the desired timeout for your function.
diff --git a/serverless/functions/how-to/package-function-dependencies-in-zip.mdx b/serverless/functions/how-to/package-function-dependencies-in-zip.mdx
index 3250c01264..cc7e88332e 100644
--- a/serverless/functions/how-to/package-function-dependencies-in-zip.mdx
+++ b/serverless/functions/how-to/package-function-dependencies-in-zip.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Package function dependencies in a ZIP file for Scaleway Serverless Functions.
tags: functions zip-file
dates:
- validation: 2024-04-16
+ validation: 2024-10-23
posted: 2021-05-26
categories:
- serverless
diff --git a/serverless/functions/how-to/test-a-function.mdx b/serverless/functions/how-to/test-a-function.mdx
index 66987120a5..be4f554437 100644
--- a/serverless/functions/how-to/test-a-function.mdx
+++ b/serverless/functions/how-to/test-a-function.mdx
@@ -38,5 +38,3 @@ This page shows you how to execute Serverless Functions from the [Scaleway conso
8. Click **Run**.
The **Output** section displays the response from your function and the status code.
-
-
diff --git a/serverless/functions/index.mdx b/serverless/functions/index.mdx
index 31272f77d7..4a5baab212 100644
--- a/serverless/functions/index.mdx
+++ b/serverless/functions/index.mdx
@@ -64,7 +64,7 @@ meta:
label="Read more"
/>
diff --git a/serverless/functions/reference-content/functions-handlers.mdx b/serverless/functions/reference-content/functions-handlers.mdx
index faa8c30732..c0ce632b2a 100644
--- a/serverless/functions/reference-content/functions-handlers.mdx
+++ b/serverless/functions/reference-content/functions-handlers.mdx
@@ -7,7 +7,7 @@ content:
paragraph: Discover how to implement function handlers for Serverless Functions in Scaleway.
tags: serverless functions cron crontab schedule cronjob
dates:
- validation: 2024-04-09
+ validation: 2024-10-15
posted: 2024-04-09
categories:
- serverless
@@ -81,4 +81,3 @@ categories:
-
diff --git a/serverless/functions/troubleshooting/common-errors.mdx b/serverless/functions/troubleshooting/common-errors.mdx
index e308928d89..f72bfef40a 100644
--- a/serverless/functions/troubleshooting/common-errors.mdx
+++ b/serverless/functions/troubleshooting/common-errors.mdx
@@ -108,3 +108,20 @@ The new deploy failed, and the [fallback mechanism has been triggered](/serverle
### Possible solution
Identify the element that caused the deployment to fail, fix the error, and deploy the function again.
+
+## My environment variable or secret is not properly injected in my function
+
+### Cause
+
+Environment variables or secrets that are too large, or that contain carriage returns and spread over several lines as shown below, will not be injected properly.
+
+```
+"hello
+world
+.
+"
+```
+
+### Solution
+
+To avoid issues while injecting environment variables and secrets, we recommend encoding them to `base64`.
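+
+As a minimal sketch (the variable name and value below are placeholders), you can encode a multi-line value to `base64` before setting it, and decode it inside your function or container at runtime:
+
+```bash
+# Encode the multi-line value once, and use the result as the environment variable value
+printf 'hello\nworld\n.\n' | base64
+
+# At runtime, decode the hypothetical MY_SECRET_B64 variable back to its original form
+echo "$MY_SECRET_B64" | base64 --decode
+```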
diff --git a/serverless/jobs/concepts.mdx b/serverless/jobs/concepts.mdx
index 39297a89a7..fcdcd4ec6b 100644
--- a/serverless/jobs/concepts.mdx
+++ b/serverless/jobs/concepts.mdx
@@ -53,6 +53,10 @@ The maximum duration option allows you to define the maximum execution time befo
A schedule (cron) is a mechanism used to automatically start a Serverless Job at a specific time on a recurring schedule. It works similarly to a traditional Linux cron job, using the `* * * * *` format. Refer to our [cron schedules reference](/serverless/jobs/reference-content/cron-schedules/) for more information.
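+
+For example, the expression below (illustrative) starts a job at 02:00 every day:
+
+```
+0 2 * * *
+```
+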
+## Secrets reference
+
+A secret reference is a mechanism that allows you to use a secret stored in [Secret Manager](/identity-and-access-management/secret-manager/) within Serverless Jobs, so you can securely consume sensitive data, such as API secret keys, passwords, tokens, or certificates, without hardcoding it.
+
## Startup command
This optional field allows you to specify a custom command executed upon starting your job if your container image does not have one already, or if you use a public container image.
diff --git a/serverless/jobs/how-to/create-job-from-external-registry.mdx b/serverless/jobs/how-to/create-job-from-external-registry.mdx
index 6868d88319..effd243a84 100644
--- a/serverless/jobs/how-to/create-job-from-external-registry.mdx
+++ b/serverless/jobs/how-to/create-job-from-external-registry.mdx
@@ -14,10 +14,12 @@ categories:
- jobs
---
-Scaleway allows you to create jobs from external public [container registries](/containers/container-registry/concepts/#registry), such as Docker Hub, AWS container registries, GitLab container registry, etc.
+Scaleway Serverless Jobs allows you to create jobs from external public [container registries](/containers/container-registry/concepts/#registry), such as Docker Hub, AWS container registries, or the GitLab container registry.
-
- Private container registries are currently not supported.
+Private external container registries are currently not supported.
+
+
+
@@ -37,6 +39,10 @@ Scaleway allows you to create jobs from external public [container registries](/
- Choose the **resources** to be allocated to your job at runtime. These define the performance characteristics of your job.
- Optionally, add a **cron schedule** in the `* * * * *` format, and select your time zone to run your job periodically. Refer to the [cron schedules documentation](/serverless/jobs/reference-content/cron-schedules/) for more information.
- Define any **environment variables** you want to inject into your job in the advanced options. For each environment variable, click **+Add new variable** and enter the key/value pair.
+
+    Encode your environment variables to `base64` if they are too large or contain carriage returns.
+
+ - Add the desired [secret references](/serverless/jobs/how-to/reference-secret-in-job/) to your job.
- Add a **startup command** to your job. It will be executed every time your job is run.
- Set a **maximum duration** to your job to stop it automatically if it does not complete within this limit.
- Verify the **estimated cost**.
diff --git a/serverless/jobs/how-to/create-job-from-scaleway-registry.mdx b/serverless/jobs/how-to/create-job-from-scaleway-registry.mdx
index 58c6d3b043..15738c37b2 100644
--- a/serverless/jobs/how-to/create-job-from-scaleway-registry.mdx
+++ b/serverless/jobs/how-to/create-job-from-scaleway-registry.mdx
@@ -34,6 +34,10 @@ Scaleway's Serverless Jobs allows you to create jobs from several container [reg
- Choose the **resources** to be allocated to your job at runtime. These define the performance characteristics of your job.
- Add a **cron schedule** in the `* * * * *` format, and select your time zone to run your job periodically. Refer to the [cron schedules documentation](/serverless/jobs/reference-content/cron-schedules/) for more information.
- Define any **environment variables** you want to inject into your job in the advanced options. For each environment variable, click **+Add new variable** and enter the key/value pair.
+
+    Encode your environment variables to `base64` if they are too large or contain carriage returns.
+
+ - Add the desired [secret references](/serverless/jobs/how-to/reference-secret-in-job/) to your job.
- Add a **startup command** to your job. It will be executed every time your job is run.
- Set a **maximum duration** to your job to stop it automatically if it does not complete within this limit.
- Verify the **estimated cost**.
diff --git a/serverless/jobs/how-to/reference-secret-in-job.mdx b/serverless/jobs/how-to/reference-secret-in-job.mdx
new file mode 100644
index 0000000000..b1fa7bd272
--- /dev/null
+++ b/serverless/jobs/how-to/reference-secret-in-job.mdx
@@ -0,0 +1,80 @@
+---
+meta:
+ title: How to reference secrets in Serverless Jobs
+ description: Steps to reference secrets from Secret Manager in your Serverless Jobs.
+content:
+ h1: How to reference secrets in Serverless Jobs
+ paragraph: Steps to reference secrets from Secret Manager in your Serverless Jobs.
+tags: serverless jobs secrets secret-manager environment-variable
+dates:
+ validation: 2024-10-27
+ posted: 2024-10-27
+categories:
+ - serverless
+ - jobs
+---
+
+Serverless Jobs seamlessly integrates with [Secret Manager](/identity-and-access-management/secret-manager/), which allows you to store, manage, and access sensitive information, such as credentials, SSH keys, SSL/TLS certificates, or any key/value pairs you need to secure.
+
+You can reference any secret stored in Secret Manager in a job, without having to hardcode any sensitive data.
+
+A [job run](/serverless/jobs/concepts/#job-run) accesses each secret at startup, and each access generates a call to the Secret Manager API, which is billed accordingly. Refer to the [Secret Manager pricing](/identity-and-access-management/secret-manager/) for more information.
+
+
+
+- A Scaleway account logged into the [console](https://console.scaleway.com)
+- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
+- [Created a Serverless Job](/serverless/jobs/how-to/create-job-from-scaleway-registry/)
+- [Created a secret](/identity-and-access-management/secret-manager/how-to/create-secret/)
+
+## Reference a secret in a job
+
+1. Click **Jobs** in the **Serverless** section of the side menu. The jobs page displays.
+
+2. Click the name of the job to which you want to add a secret, then open the **Settings** tab.
+
+3. In the **Secret references** section, click **+ Add secret reference**. A pop-up displays.
+
+4. Select the secret you want to reference and the desired version, then click **Select reference method**.
+
+5. Select the desired reference method:
+
+   - **File**: copies the value of your secret to a file stored at the indicated location within your container. This method is recommended for large or complex data. For example, if your secret is a certificate, you can store it as a file in the `/my-certificates` folder in your container.
+
+   - **Environment variable**: passes the value of your secret to your job as an environment variable. This method is recommended for small pieces of information, such as passwords or API secret keys. For example, if you name this variable `MY_SECRET`, reading `$MY_SECRET` in your container securely returns the value of the selected secret.
+
+6. Click **Add reference** to add the secret to your Serverless Job. Optionally, tick the **Add another reference** box to reference another secret right away, then repeat steps 4 to 6.
+
+The secret is now referenced in your Serverless Job, and can be used within the container.
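+
+As a minimal sketch, reusing the example names above (`MY_SECRET`, and a certificate stored in the `/my-certificates` folder; the file name below is illustrative), your container can then consume the secret like any other environment variable or file:
+
+```
+# Secret referenced as an environment variable
+echo "$MY_SECRET"   # avoid printing real secrets in production logs
+
+# Secret referenced as a file
+cat /my-certificates/certificate.pem
+```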
+
+## Update a secret reference from a job
+
+1. Click **Jobs** in the **Serverless** section of the side menu. The jobs page displays.
+
+2. Click the name of the job for which you want to update a secret, then open the **Settings** tab.
+
+3. In the **Secret references** section, click the icon next to the secret reference you want to update. A pop-up displays.
+
+4. Update the secret version if needed, then click **Update** to save your changes, or click **Select reference method** to continue.
+
+5. Update either the location of the file or the name of the environment variable, then click **Update reference** to confirm your changes.
+
+
+You cannot change the reference method of an existing secret. You have to delete the secret reference within the job first, then create it again with the desired reference method.
+
+
+## Delete a secret reference from a job
+
+1. Click **Jobs** in the **Serverless** section of the side menu. The jobs page displays.
+
+2. Click the name of the job for which you want to delete a secret, then open the **Settings** tab.
+
+3. In the **Secret references** section, click the icon next to the secret reference you want to delete. A confirmation pop-up displays.
+
+4. Click **Delete reference** to confirm.
+
+The secret is no longer referenced in your Serverless Job.
+
+
+Deleting a secret from the **Settings** tab of a job only deletes the secret reference, not the secret itself. To permanently delete a secret, follow [this procedure](/identity-and-access-management/secret-manager/how-to/delete-secret/).
+
\ No newline at end of file
diff --git a/serverless/jobs/quickstart.mdx b/serverless/jobs/quickstart.mdx
index 0dec7783cc..d575d0d01c 100644
--- a/serverless/jobs/quickstart.mdx
+++ b/serverless/jobs/quickstart.mdx
@@ -26,18 +26,17 @@ This page explains how to create a job definition with the latest Alpine Linux i
- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
+- [Created a Container Registry namespace](/containers/container-registry/how-to/create-namespace/) and [pushed a container image](/containers/container-registry/how-to/push-images/) to it
## How to create a job definition
-To keep this quickstart simple, we will create a job from a public external registry. To create a job from the Scaleway Container Registry, refer to [this documentation](/serverless/jobs/how-to/create-job-from-scaleway-registry/).
-
1. Click **Jobs** in the **Serverless** section of the side menu. The Jobs page displays.
2. Click **+ Create job**.
3. Complete the following steps in the wizard:
- - Select the **External** container registry.
- - Enter `docker.io/library/alpine:latest` in the image URL field.
+ - Select the **Scaleway** Container Registry.
+ - Select the appropriate **Registry namespace** from the drop-down list, then select the desired **container** and **tag**.
- Enter a **name** or use the automatically generated one. The name can only contain lowercase alphanumeric characters and dashes.
- Enter a **description** (optional).
- Select the region in which your job will be created.
@@ -51,6 +50,10 @@ To keep this quickstart simple, we will create a job from a public external regi
6. Click **Create a job definition** to finish.
+
+
+
+
## How to run a job
1. Click **Jobs** in the **Serverless** section of the side menu. The jobs page displays.
@@ -59,10 +62,12 @@ To keep this quickstart simple, we will create a job from a public external regi
3. From the **Overview** tab, click **Run job**.
-The execution appears in the **Job runs** section of the **Overview** tab.
+ The execution appears in the **Job runs** section of the **Overview** tab.
+
+4. Click the icon next to the last execution in the **Job runs** section, then click **Logs** to access your job's logs.
- Refer to [How to monitor a job](/serverless/jobs/how-to/monitor-job/) to see the logs of the job you just executed.
+ Make sure that you [have retrieved your Grafana credentials](/observability/cockpit/how-to/retrieve-grafana-credentials/) before accessing your job's logs.
## How to delete a job
diff --git a/serverless/jobs/troubleshooting/common-errors.mdx b/serverless/jobs/troubleshooting/common-errors.mdx
index 09a1d162d7..a6d923a3ce 100644
--- a/serverless/jobs/troubleshooting/common-errors.mdx
+++ b/serverless/jobs/troubleshooting/common-errors.mdx
@@ -20,3 +20,30 @@ categories:
- Make sure you built your image for an `amd64` architecture, as `arm64` is not supported. See the [Architecture](/serverless/jobs/reference-content/jobs-limitations/#Architecture) documentation.
- Make sure your deployment does not exceed the limitations of [Serverless Jobs](/serverless/jobs/reference-content/jobs-limitations/).
+
+## Issues when retrieving an external image
+
+### Cause
+
+Serverless products support external public registries (such as [Docker Hub](https://hub.docker.com/)), but we do not recommend using them: their uncontrolled rate limiting can cause failures when starting resources, and their usage conditions and pricing can change unexpectedly.
+
+### Solution
+
+We recommend using [Scaleway's Container Registry](/containers/container-registry/) instead, as it allows for seamless integration with Serverless Containers and Jobs at a [competitive price](/faq/containerregistry/#how-am-i-billed-for-scaleway-container-registry).
+
+## My environment variable or secret is not properly injected in my job
+
+### Cause
+
+Environment variables or secrets that are too large, or that contain carriage returns and span several lines (as shown below), will not be injected properly.
+
+```
+"hello
+world
+.
+"
+```
+
+### Solution
+
+To avoid issues while injecting environment variables and secrets, we recommend encoding them to `base64`.
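+
+For example, assuming the encoded value is stored in a variable named `CONFIG_B64` (illustrative), a job based on a shell-capable image can decode it in its startup command:
+
+```
+sh -c 'echo "$CONFIG_B64" | base64 -d > /tmp/config'
+```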
\ No newline at end of file
diff --git a/serverless/messaging/api-cli/connect-aws-cli.mdx b/serverless/messaging/api-cli/connect-aws-cli.mdx
index f865a7ab00..52c7e2e486 100644
--- a/serverless/messaging/api-cli/connect-aws-cli.mdx
+++ b/serverless/messaging/api-cli/connect-aws-cli.mdx
@@ -11,6 +11,7 @@ categories:
dates:
validation: 2024-04-09
posted: 2023-01-04
+validation_frequency: 8
---
The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) that provides commands for interacting with AWS services. With minimal configuration, you can start using the AWS-CLI with Scaleway Messaging and Queuing SQS and/or SNS. This allows you to create, list and manage your queues and topics, send messages and much more, all from your command line.
diff --git a/serverless/messaging/api-cli/nats-cli.mdx b/serverless/messaging/api-cli/nats-cli.mdx
index b013786dac..f98fe2ef1b 100644
--- a/serverless/messaging/api-cli/nats-cli.mdx
+++ b/serverless/messaging/api-cli/nats-cli.mdx
@@ -11,6 +11,7 @@ categories:
dates:
validation: 2024-04-09
posted: 2023-01-20
+validation_frequency: 8
---
The NATS CLI (`nats`) is the official NATS tool for managing your NATS resources. It allows you to simply create and manage your streams, consumers and more.
diff --git a/serverless/messaging/api-cli/python-node-sns.mdx b/serverless/messaging/api-cli/python-node-sns.mdx
index fd2b34dda7..37b900b061 100644
--- a/serverless/messaging/api-cli/python-node-sns.mdx
+++ b/serverless/messaging/api-cli/python-node-sns.mdx
@@ -11,6 +11,7 @@ categories:
dates:
validation: 2024-04-09
posted: 2023-01-04
+validation_frequency: 8
---
AWS provides a number of **S**oftware **D**evelopment **K**its (SDKs) which provide language-specific APIs for AWS services, including [SNS](/serverless/messaging/concepts/#sns).
diff --git a/serverless/messaging/api-cli/python-node-sqs.mdx b/serverless/messaging/api-cli/python-node-sqs.mdx
index 08dc496826..a3467542fc 100644
--- a/serverless/messaging/api-cli/python-node-sqs.mdx
+++ b/serverless/messaging/api-cli/python-node-sqs.mdx
@@ -11,6 +11,7 @@ categories:
dates:
validation: 2024-04-09
posted: 2023-01-04
+validation_frequency: 8
---
AWS provides a number of SDKs (**S**oftware **D**evelopment **K**its) which provide language-specific APIs for AWS services, including [SQS](/serverless/messaging/concepts#sqs).
diff --git a/serverless/messaging/api-cli/sqs-sns-aws-cli.mdx b/serverless/messaging/api-cli/sqs-sns-aws-cli.mdx
index 5c7ce1164b..3b89ef6a95 100644
--- a/serverless/messaging/api-cli/sqs-sns-aws-cli.mdx
+++ b/serverless/messaging/api-cli/sqs-sns-aws-cli.mdx
@@ -11,6 +11,7 @@ categories:
dates:
validation: 2024-04-16
posted: 2023-04-04
+validation_frequency: 8
---
The AWS-CLI is an open-source tool built on top of the AWS SDK for Python (Boto) that provides commands for interacting with AWS services. Once you have [connected Scaleway Messaging and Queuing SQS and/or SNS to the AWS-CLI](/serverless/messaging/api-cli/connect-aws-cli/), you can start creating, listing and managing your queues and topics, sending messages and much more, all from your command line.
diff --git a/serverless/messaging/how-to/monitor-mnq-cockpit.mdx b/serverless/messaging/how-to/monitor-mnq-cockpit.mdx
index 3359fc4cd5..fb5e6a80c6 100644
--- a/serverless/messaging/how-to/monitor-mnq-cockpit.mdx
+++ b/serverless/messaging/how-to/monitor-mnq-cockpit.mdx
@@ -11,6 +11,7 @@ categories:
dates:
validation: 2024-04-09
posted: 2023-09-07
+validation_frequency: 8
---
You can view your Messaging and Queuing metrics via [Scaleway Cockpit](/observability/cockpit/quickstart/). This allows you to monitor your queues/streams and messages at a glance. There are two steps to complete to view your Messaging and Queuing metrics for the first time with Cockpit:
diff --git a/serverless/messaging/reference-content/limitations.mdx b/serverless/messaging/reference-content/limitations.mdx
index 74d8937cb0..7faa70262d 100644
--- a/serverless/messaging/reference-content/limitations.mdx
+++ b/serverless/messaging/reference-content/limitations.mdx
@@ -9,6 +9,7 @@ tags: messaging limitations space size storage payload max-streams max-consumers
dates:
validation: 2024-04-19
posted: 2023-01-04
+validation_frequency: 8
categories:
- serverless
---
diff --git a/serverless/messaging/reference-content/nats-overview.mdx b/serverless/messaging/reference-content/nats-overview.mdx
index 373dc27e96..563ba97f16 100644
--- a/serverless/messaging/reference-content/nats-overview.mdx
+++ b/serverless/messaging/reference-content/nats-overview.mdx
@@ -11,6 +11,7 @@ categories:
dates:
validation: 2024-04-09
posted: 2023-01-04
+validation_frequency: 8
---
## What is NATS?
diff --git a/serverless/messaging/reference-content/sns-overview.mdx b/serverless/messaging/reference-content/sns-overview.mdx
index bec9dcf06f..4318f12b7d 100644
--- a/serverless/messaging/reference-content/sns-overview.mdx
+++ b/serverless/messaging/reference-content/sns-overview.mdx
@@ -11,6 +11,7 @@ categories:
dates:
validation: 2024-04-09
posted: 2023-01-04
+validation_frequency: 8
---
## What is SNS?
diff --git a/serverless/messaging/reference-content/sqs-overview.mdx b/serverless/messaging/reference-content/sqs-overview.mdx
index cd3cdcd732..01aa050265 100644
--- a/serverless/messaging/reference-content/sqs-overview.mdx
+++ b/serverless/messaging/reference-content/sqs-overview.mdx
@@ -11,6 +11,7 @@ categories:
dates:
validation: 2024-04-09
posted: 2023-01-04
+validation_frequency: 8
---
## What is SQS?
diff --git a/storage/object/api-cli/bucket-operations.mdx b/storage/object/api-cli/bucket-operations.mdx
index 1c8aeacfa2..29e67fb235 100644
--- a/storage/object/api-cli/bucket-operations.mdx
+++ b/storage/object/api-cli/bucket-operations.mdx
@@ -668,7 +668,7 @@ aws s3api put-bucket-versioning --bucket BucketName
## PutBucketPolicy
-This operation applies an S3 bucket policy to an S3 bucket.
+This operation applies an Object Storage bucket policy to an Object Storage bucket.
If the operation is successful, no output will be returned.
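+
+For example, a sketch using the AWS CLI, where `policy.json` is an illustrative local file containing your policy document:
+
+```
+aws s3api put-bucket-policy --bucket BucketName --policy file://policy.json
+```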
diff --git a/storage/object/api-cli/bucket-policy.mdx b/storage/object/api-cli/bucket-policy.mdx
index 8a29db9252..cfbb255055 100644
--- a/storage/object/api-cli/bucket-policy.mdx
+++ b/storage/object/api-cli/bucket-policy.mdx
@@ -362,7 +362,7 @@ Bucket policies use a JSON-based access policy language and are composed of stri
### Action
**Description**
-: Consists of an S3 namespace, a colon, and the name of an action. Action names can include wildcards represented by `*`.
+: Consists of an Amazon S3 namespace, a colon, and the name of an action. Action names can include wildcards represented by `*`.
**Required**
: Yes
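+
+For example (illustrative): `s3:GetObject` designates a single action, while `s3:Get*` matches every action whose name starts with `Get`.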
@@ -451,7 +451,7 @@ Bucket policies use a JSON-based access policy language and are composed of stri
### Resource
**Description**
-: Consists in the S3 resource path.
+: Consists of the Amazon S3 resource path.
**Required**
: Yes
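+
+For example (illustrative): `"Resource": ["my-bucket", "my-bucket/*"]` targets both the bucket `my-bucket` and all the objects it contains.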
diff --git a/storage/object/api-cli/bucket-website-api.mdx b/storage/object/api-cli/bucket-website-api.mdx
index 537142da84..51852b3e07 100644
--- a/storage/object/api-cli/bucket-website-api.mdx
+++ b/storage/object/api-cli/bucket-website-api.mdx
@@ -179,7 +179,7 @@ If you want your website to be accessible, you need to set up a bucket policy.
### Configuring your URL
-You can access your website using the website endpoint of your bucket, generated by s3 under the default format:
+You can access your website using the website endpoint of your bucket, which is generated under the following default format:
`https://<bucket-name>.s3-website.<region>.scw.cloud`
diff --git a/storage/object/api-cli/combining-iam-and-object-storage.mdx b/storage/object/api-cli/combining-iam-and-object-storage.mdx
index 72d993fa08..d01825e3c9 100644
--- a/storage/object/api-cli/combining-iam-and-object-storage.mdx
+++ b/storage/object/api-cli/combining-iam-and-object-storage.mdx
@@ -5,7 +5,7 @@ meta:
content:
h1: Combining IAM and bucket policies to set up granular access to Object Storage
paragraph: Integrate IAM with Scaleway Object Storage for enhanced access control.
-tags: object storage command bucket s3 iam permissions acl policy
+tags: object storage command bucket amazon-s3 iam permissions acl policy
dates:
validation: 2024-05-14
posted: 2023-01-17
diff --git a/storage/object/api-cli/installing-minio-client.mdx b/storage/object/api-cli/installing-minio-client.mdx
index eb2d6991cb..832e0c645d 100644
--- a/storage/object/api-cli/installing-minio-client.mdx
+++ b/storage/object/api-cli/installing-minio-client.mdx
@@ -14,7 +14,7 @@ categories:
- object-storage
---
-The [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) (`mc`) is a command-line tool that allows you to manage your s3 projects, providing a modern alternative to UNIX commands.
+The [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) (`mc`) is a command-line tool that allows you to manage your Object Storage projects, providing a modern alternative to UNIX commands.
diff --git a/storage/object/api-cli/installing-rclone.mdx b/storage/object/api-cli/installing-rclone.mdx
index 754693cc3e..3e85740fc3 100644
--- a/storage/object/api-cli/installing-rclone.mdx
+++ b/storage/object/api-cli/installing-rclone.mdx
@@ -14,7 +14,7 @@ categories:
- object-storage
---
-[Rclone](https://rclone.org) is a command-line tool that can be used to manage your cloud storage. It communicates with any S3-compatible cloud storage provider as well as other storage platforms.
+[Rclone](https://rclone.org) is a command-line tool that can be used to manage your cloud storage. It communicates with any Amazon S3-compatible cloud storage provider as well as other storage platforms.
Follow the instructions given in the [official Rclone documentation here](https://rclone.org/install/) to install Rclone.
@@ -79,7 +79,7 @@ For example, on Linux:
```
3. Type `s3` and hit enter to confirm this storage type. The following output displays:
```
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
@@ -106,10 +106,10 @@ For example, on Linux:
\ "TencentCOS"
12 / Wasabi Object Storage
\ "Wasabi"
13 / Any other S3 compatible provider
\ "Other"
```
-4. Type `Scaleway` and hit enter to confirm this S3 provider. The following output displays:
+4. Type `Scaleway` and hit enter to confirm this Amazon S3 provider. The following output displays:
```
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
@@ -252,6 +252,6 @@ For example, on Linux:
If you want to be able to transfer data to or from a bucket in a different region to the one you just set up, repeat steps 1-14 again to set up a new remote in the required region. Simply enter the required region at steps 7 and 8. Similarly, you may wish to set up a new remote for a different Object Storage provider.
-For further information, refer to the official [RClone S3 Object Storage Documentation](https://rclone.org/s3/). Official documentation also exists for [other storage backends](https://rclone.org/docs/).
+For further information, refer to the official [Rclone documentation on Amazon S3-compatible storage](https://rclone.org/s3/). Official documentation also exists for [other storage backends](https://rclone.org/docs/).
diff --git a/storage/object/api-cli/lifecycle-rules-api.mdx b/storage/object/api-cli/lifecycle-rules-api.mdx
index 969274fa17..b841ab1112 100644
--- a/storage/object/api-cli/lifecycle-rules-api.mdx
+++ b/storage/object/api-cli/lifecycle-rules-api.mdx
@@ -37,7 +37,7 @@ Currently, the **expiration**, **transition**, and **incomplete multipart upload
There might, for example, be a need to store log files for a week or a month, after which they become obsolete. It is possible to set a lifecycle rule to delete them automatically when they become obsolete. If you consider that a 3-month-old object is rarely used but still has a value, you might want to configure a rule to send it automatically to [Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/), for example.
-Lifecycle management on Object Storage is available on every AWS S3 compliant tool (sdk, aws-cli, boto, etc), as well as from the Scaleway [console](https://console.scaleway.com/organization).
+Lifecycle management on Object Storage is available on every Amazon S3-compliant tool (SDKs, aws-cli, boto, etc.), as well as from the Scaleway [console](https://console.scaleway.com/organization).
## Lifecycle specification
diff --git a/storage/object/api-cli/manage-bucket-permissions-ip.mdx b/storage/object/api-cli/manage-bucket-permissions-ip.mdx
index 2cb9c1de20..a42c76fa32 100644
--- a/storage/object/api-cli/manage-bucket-permissions-ip.mdx
+++ b/storage/object/api-cli/manage-bucket-permissions-ip.mdx
@@ -14,7 +14,7 @@ categories:
- object-storage
---
-You can stipulate which IP addresses or IP ranges have access or permission to perform S3 operations on your buckets by creating a [bucket policy](/storage/object/api-cli/bucket-policy/) with the `IpAddress` or `NotIpAddress` conditions.
+You can stipulate which IP addresses or IP ranges have access or permission to perform operations on your buckets by creating a [bucket policy](/storage/object/api-cli/bucket-policy/) with the `IpAddress` or `NotIpAddress` conditions.
It is possible to `Allow` actions for a specific IP address or range of IPs, using the `IpAddress` condition and the `aws:SourceIp` condition key.
diff --git a/storage/object/api-cli/managing-lifecycle-cliv2.mdx b/storage/object/api-cli/managing-lifecycle-cliv2.mdx
index 043c2a1b5f..df8185b9ff 100644
--- a/storage/object/api-cli/managing-lifecycle-cliv2.mdx
+++ b/storage/object/api-cli/managing-lifecycle-cliv2.mdx
@@ -14,7 +14,7 @@ categories:
- object-storage
---
-[Scaleway Object Storage](/storage/object/concepts/#object-storage) is a service based on the S3 protocol. It allows you to store different types of objects (documents, images, videos, etc.) and distribute them instantly, anywhere in the world. You can create and manage your Object Storage resources from the [console](https://console.scaleway.com/login), or via the [Scaleway Command Line Interface](/developer-tools/scaleway-cli/quickstart/) that uses external tools such as `rclone`, `s3cmd` and `mc`.
+[Scaleway Object Storage](/storage/object/concepts/#object-storage) is a service based on the Amazon S3 protocol. It allows you to store different types of objects (documents, images, videos, etc.) and distribute them instantly, anywhere in the world. You can create and manage your Object Storage resources from the [console](https://console.scaleway.com/login), or via the [Scaleway Command Line Interface](/developer-tools/scaleway-cli/quickstart/) that uses external tools such as `rclone`, `s3cmd` and `mc`.
## Scaleway Command Line Interface Overview
@@ -27,7 +27,7 @@ categories:
- A valid [API key](/identity-and-access-management/iam/how-to/create-api-keys/)
- An [Object Storage bucket](/storage/object/how-to/create-a-bucket/)
- Installed and initialized the [Scaleway CLI](/developer-tools/scaleway-cli/quickstart/)
-- Downloaded [S3cmd](https://github.com/s3tools/s3cmd), [rclone](https://rclone.org/downloads/) and [mc](https://github.com/minio/mc) s3 tools
+- Downloaded [S3cmd](https://github.com/s3tools/s3cmd), [rclone](https://rclone.org/downloads/) and [mc](https://github.com/minio/mc)
## Creating a configuration file for the Scaleway CLI
@@ -98,7 +98,7 @@ categories:
```
-## Installing a configuration file for S3 tools (s3cmd, rclone, and mc)
+## Installing a configuration file for Amazon S3-compatible tools (s3cmd, rclone, and mc)
1. Run the following command in a terminal to install a configuration file for `s3cmd`:
```
@@ -210,7 +210,7 @@ Run the following command in a terminal to remove an object from your bucket:
```
- For more information about the s3 tools used in this documentation, refer to the official [rclone](https://rclone.org/docs/), [s3cmd](https://s3tools.org/s3cmd-howto), and [mc](https://github.com/minio/mc) documentation.
+ For more information about the Amazon S3-compatible tools used in this documentation, refer to the official [rclone](https://rclone.org/docs/), [s3cmd](https://s3tools.org/s3cmd-howto), and [mc](https://github.com/minio/mc) documentation.
diff --git a/storage/object/api-cli/migrating-buckets.mdx b/storage/object/api-cli/migrating-buckets.mdx
index de9e5eda2a..601aaa1d57 100644
--- a/storage/object/api-cli/migrating-buckets.mdx
+++ b/storage/object/api-cli/migrating-buckets.mdx
@@ -25,7 +25,7 @@ categories:
```
aws s3api create-bucket --bucket BUCKET-TARGET
```
-2. Copy the objects between the S3 buckets.
+2. Copy the objects between the Object Storage buckets.
If you have objects in the Scaleway `Glacier` storage class you must [restore](/storage/object/how-to/restore-an-object-from-glacier/) them before continuing.
diff --git a/storage/object/api-cli/object-operations.mdx b/storage/object/api-cli/object-operations.mdx
index 3014f58f22..3301b14371 100644
--- a/storage/object/api-cli/object-operations.mdx
+++ b/storage/object/api-cli/object-operations.mdx
@@ -388,7 +388,7 @@ aws s3api put-object --bucket BucketName --key dir-1/ObjectName --body ObjectNam
```
- To define the [storage class](/storage/object/concepts/#storage-class) of the object directly upon creation, use the `--storage-class ` option with `awscli` or add the `x-amz-storage-class: ` header when using the S3 API. You can specify one of the following classes: `STANDARD`, `ONEZONE_IA`, `GLACIER`. Example: `x-amz-storage-class: ONEZONE_IA`.
+To define the [storage class](/storage/object/concepts/#storage-class) of the object directly upon creation, use the `--storage-class <storage_class>` option with `awscli` or add the `x-amz-storage-class: <storage_class>` header when using the Amazon S3 API. You can specify one of the following classes: `STANDARD`, `ONEZONE_IA`, `GLACIER`. Example: `x-amz-storage-class: ONEZONE_IA`.
If no class is specified, the object is created as STANDARD by default.
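+
+For example, a sketch uploading an object directly to `ONEZONE_IA` (bucket and object names are illustrative):
+
+```
+aws s3api put-object --bucket BucketName --key ObjectName --body ObjectName --storage-class ONEZONE_IA
+```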
diff --git a/storage/object/api-cli/object-storage-aws-cli.mdx b/storage/object/api-cli/object-storage-aws-cli.mdx
index d47b5abfb0..6131cfa15c 100644
--- a/storage/object/api-cli/object-storage-aws-cli.mdx
+++ b/storage/object/api-cli/object-storage-aws-cli.mdx
@@ -39,7 +39,7 @@ The AWS-CLI is an open-source tool built on top of the [AWS SDK for Python (Boto
3. When prompted, enter the following elements:
- your API access key
- your API secret key
- - your preferred default S3 region (`fr-par`, `nl-ams`, or `pl-waw`)
+ - your preferred default Object Storage region (`fr-par`, `nl-ams`, or `pl-waw`)
- `json` as the default output format
4. Open the `~/.aws/config` file in a code editor and edit it as follows:
diff --git a/storage/object/api-cli/post-object.mdx b/storage/object/api-cli/post-object.mdx
index 3c5bec78e3..cd7f883be8 100644
--- a/storage/object/api-cli/post-object.mdx
+++ b/storage/object/api-cli/post-object.mdx
@@ -87,7 +87,7 @@ import hashlib
ACCESS_KEY_ID = "SCWXXXXXXXXXXXXXXXXX"
SECRET_ACCESS_KEY = "110e8400-e29b-11d4-a716-446655440000"
-# S3 Region
+# Object Storage Region
REGION = "fr-par"
# Example for the demo
@@ -213,7 +213,7 @@ import requests
from botocore.exceptions import ClientError
-# Generate a presigned URL for the S3 object
+# Generate a presigned URL for the object
session = boto3.session.Session()
s3_client = session.client(
diff --git a/storage/object/api-cli/setting-cors-rules.mdx b/storage/object/api-cli/setting-cors-rules.mdx
index ffa736bc65..09ec7fb09c 100644
--- a/storage/object/api-cli/setting-cors-rules.mdx
+++ b/storage/object/api-cli/setting-cors-rules.mdx
@@ -5,7 +5,7 @@ meta:
content:
h1: Setting CORS rules on Object Storage buckets
paragraph: Set CORS rules to manage cross-origin requests in Scaleway Object Storage.
-tags: object storage object-storage s3 bucket cors cors-rule
+tags: object storage object-storage amazon-s3 bucket cors cors-rule
dates:
validation: 2024-06-17
posted: 2021-05-19
diff --git a/storage/object/api-cli/using-api-call-list.mdx b/storage/object/api-cli/using-api-call-list.mdx
index 668dae9fd7..ab1f818db4 100644
--- a/storage/object/api-cli/using-api-call-list.mdx
+++ b/storage/object/api-cli/using-api-call-list.mdx
@@ -1,9 +1,9 @@
---
meta:
- title: Scaleway Object Storage supported S3 API calls
+ title: Supported Object Storage API calls
description: Learn how to use the API call list effectively with Scaleway Object Storage.
content:
- h1: Object Storage API
+ h1: Supported Object Storage API calls
paragraph: Learn how to use the API call list effectively with Scaleway Object Storage.
tags: object storage object-storage api bucket
dates:
@@ -67,7 +67,7 @@ Status:
| PutBucketLifecycle | Creates a new lifecycle configuration or replaces an existing bucket lifecycle configuration | ❗ |
| PutBucketLifecycleConfiguration| Creates a new lifecycle configuration or replaces an existing bucket lifecycle configuration | ✅ |
| PutBucketNotification | Enables notifications of specified events for a bucket | ⌛ |
-| [PutBucketPolicy](/storage/object/api-cli/bucket-operations/#putbucketpolicy) | Applies an S3 bucket policy to an S3 bucket. The key elements of bucket policy are [Version](/storage/object/api-cli/bucket-policy/#version), [ID](/storage/object/api-cli/bucket-policy/#id), [Statement](/storage/object/api-cli/bucket-policy/#statement), [Sid](/storage/object/api-cli/bucket-policy/#sid), [Principal](/storage/object/api-cli/bucket-policy/#principal), [Action](/storage/object/api-cli/bucket-policy/#action), [Effect](/storage/object/api-cli/bucket-policy/#effect), [Resource](/storage/object/api-cli/bucket-policy/#resource) and [Condition](/storage/object/api-cli/bucket-policy/#condition). You can find out more about each element by clicking the links, or consulting the full documentation | ✅ |
+| [PutBucketPolicy](/storage/object/api-cli/bucket-operations/#putbucketpolicy) | Applies an Object Storage bucket policy to an Object Storage bucket. The key elements of bucket policy are [Version](/storage/object/api-cli/bucket-policy/#version), [ID](/storage/object/api-cli/bucket-policy/#id), [Statement](/storage/object/api-cli/bucket-policy/#statement), [Sid](/storage/object/api-cli/bucket-policy/#sid), [Principal](/storage/object/api-cli/bucket-policy/#principal), [Action](/storage/object/api-cli/bucket-policy/#action), [Effect](/storage/object/api-cli/bucket-policy/#effect), [Resource](/storage/object/api-cli/bucket-policy/#resource) and [Condition](/storage/object/api-cli/bucket-policy/#condition). You can find out more about each element by clicking the links, or consulting the full documentation | ✅ |
| [PutBucketTagging](/storage/object/api-cli/bucket-operations/#putbuckettagging) | Sets the tag(s) of a bucket | ✅ |
| [PutBucketVersioning](/storage/object/api-cli/bucket-operations/#putbucketversioning) | Sets the versioning state of an existing bucket | ✅ |
| [PutBucketWebsite](/storage/object/api-cli/bucket-operations/#putbucketwebsite) | Enables bucket website and sets the basic configuration for the website | ✅ |
diff --git a/storage/object/concepts.mdx b/storage/object/concepts.mdx
index ccc398fa4f..5efe1d4848 100644
--- a/storage/object/concepts.mdx
+++ b/storage/object/concepts.mdx
@@ -5,7 +5,7 @@ meta:
content:
h1: Object Storage - Concepts
paragraph: Understand key concepts and features of Scaleway Object Storage.
-tags: retention endpoint object-storage storage bucket acl multipart object s3 retention signature versioning archived
+tags: retention endpoint object-storage storage bucket acl multipart object amazon-s3 retention signature versioning archived
dates:
validation: 2024-05-06
categories:
@@ -15,7 +15,7 @@ categories:
## Access control list (ACL)
-Access control lists (ACL) are subresources attached to buckets and objects. They define which Scaleway users have access to the attached object/bucket, and the type of access they have. Whenever a user makes a request against a resource, s3 checks its ACL and verifies that they have permission to carry out the request.
+Access control lists (ACL) are subresources attached to buckets and objects. They define which Scaleway users have access to the attached object/bucket, and the type of access they have. Whenever a user makes a request against a resource, Amazon S3 checks its ACL and verifies that they have permission to carry out the request.
## Bucket
@@ -86,13 +86,13 @@ An object is a file and the metadata that describes it. Each object has a **key
## Object lock
-An S3 API feature that allows users to lock objects to prevent them from being deleted or overwritten. Objects can be put on lock for a specific amount of time or indefinitely. The lock period is defined by the user.
+An Amazon S3 API feature that allows users to lock objects to prevent them from being deleted or overwritten. Objects can be put on lock for a specific amount of time or indefinitely. The lock period is defined by the user.
The feature uses a write-once-read-many (WORM) data protection model. This model is generally used in cases where data cannot be altered once it has been written. It provides regulatory compliance and protection against ransomware and malicious or accidental deletion of objects.
## Object Storage
-A storage service based on the S3 protocol. It allows you to store different types of objects (documents, images, videos, etc.) and distribute them instantly, anywhere in the world. You can upload, download, and visualize stored objects.
+A storage service based on the Amazon S3 protocol. It allows you to store different types of objects (documents, images, videos, etc.) and distribute them instantly, anywhere in the world. You can upload, download, and visualize stored objects.
Contrary to other storage types such as block devices or file systems, Object Storage bundles the data itself along with metadata [tags](#tags) and a [prefix](#prefix), rather than a file name and a file path.
@@ -141,13 +141,13 @@ Object Lock provides two modes to manage object retention, **Compliance** and **
A retention period specifies a fixed period for which an object remains locked. During this period, your object is WORM-protected and cannot be overwritten or deleted.
-## S3
+## Amazon S3
-S3 is the de facto Object Storage protocol. Scaleway Object Storage officially supports a subset of S3. The list of supported features is described in the [Object Storage API documentation](/storage/object/api-cli/using-api-call-list/).
+Amazon S3 is the de facto Object Storage protocol. Scaleway Object Storage officially supports a subset of Amazon S3. The list of supported features is described in the [Object Storage API documentation](/storage/object/api-cli/using-api-call-list/).
## Signature V2, Signature V4
-When you send HTTP requests to Object Storage, you sign the requests so that we can identify who sent them. You sign requests with your Scaleway access key, which consists of an access key and a secret key. The two main s3 protocols for authentication are Signature v2 and Signature v4. Signature v4 is more recent and it is the recommended version.
+When you send HTTP requests to Object Storage, you sign the requests so that we can identify who sent them. You sign requests with your Scaleway API key, which consists of an access key and a secret key. The two main Amazon S3 authentication protocols are Signature v2 and Signature v4. Signature v4 is more recent and is the recommended version.
## Storage class
diff --git a/storage/object/how-to/create-bucket-policy.mdx b/storage/object/how-to/create-bucket-policy.mdx
index 1f7a5ee149..99658d2e37 100644
--- a/storage/object/how-to/create-bucket-policy.mdx
+++ b/storage/object/how-to/create-bucket-policy.mdx
@@ -5,7 +5,7 @@ meta:
content:
h1: How to create and manage bucket policies using the console
paragraph: Create and apply bucket policies for Object Storage.
-tags: bucket policy bucket console object storage s3 access
+tags: bucket policy bucket console object storage amazon-s3 access
dates:
validation: 2024-05-30
posted: 2024-05-30
diff --git a/storage/object/how-to/restore-an-object-from-glacier.mdx b/storage/object/how-to/restore-an-object-from-glacier.mdx
index 2deb91fb5c..0a2931dfdc 100644
--- a/storage/object/how-to/restore-an-object-from-glacier.mdx
+++ b/storage/object/how-to/restore-an-object-from-glacier.mdx
@@ -41,7 +41,7 @@ categories:
4. Enter the number of days after which the object will be transferred back to `Glacier`, or click the toggle to permanently restore the object.
-5. Click **Restore object from S3 Glacier**.
+5. Click **Restore object from Glacier**.
Your object remains available in `Standard` class for the duration you specified. It will be transferred automatically back to `Glacier` once the configured period is over.
diff --git a/storage/object/how-to/upload-files-into-a-bucket.mdx b/storage/object/how-to/upload-files-into-a-bucket.mdx
index 4e67015b71..7605aa6c75 100644
--- a/storage/object/how-to/upload-files-into-a-bucket.mdx
+++ b/storage/object/how-to/upload-files-into-a-bucket.mdx
@@ -14,7 +14,7 @@ categories:
- object-storage
---
-This page explains how to upload files into an Object Storage bucket using the [Scaleway console](https://consol.scaleway.com). To upload an object using the S3 API, refer to the [dedicated documentation](/storage/object/api-cli/object-operations/#putobject).
+This page explains how to upload files into an Object Storage bucket using the [Scaleway console](https://console.scaleway.com). To upload an object using the Amazon S3 API, refer to the [dedicated documentation](/storage/object/api-cli/object-operations/#putobject).
diff --git a/storage/object/index.mdx b/storage/object/index.mdx
index f85a357ae2..bc14b1306a 100644
--- a/storage/object/index.mdx
+++ b/storage/object/index.mdx
@@ -7,7 +7,7 @@ meta:
@@ -65,7 +65,7 @@ meta:
label="Read more"
/>
@@ -75,7 +75,7 @@ meta:
diff --git a/storage/object/quickstart.mdx b/storage/object/quickstart.mdx
index b96d0ece6a..37bbd9f037 100644
--- a/storage/object/quickstart.mdx
+++ b/storage/object/quickstart.mdx
@@ -14,7 +14,7 @@ categories:
- object-storage
---
-[Scaleway Object Storage](/storage/object/concepts/#object-storage) is an Object Storage service based on the S3 protocol. It allows you to store any objects (documents, images, videos, etc.) and access them anytime from anywhere in the world. You can manage your storage directly from the Scaleway console. On the control panel, you can easily upload, download, and visualize the objects in your buckets. In addition, you can integrate many existing libraries or CLI clients into your application or scripts.
+[Scaleway Object Storage](/storage/object/concepts/#object-storage) is a storage service based on the Amazon S3 protocol. It allows you to store any objects (documents, images, videos, etc.) and access them anytime from anywhere in the world. You can manage your storage directly from the Scaleway console. On the control panel, you can easily upload, download, and visualize the objects in your buckets. In addition, you can integrate many existing libraries or CLI clients into your application or scripts.
diff --git a/storage/object/reference-content/optimize-object-storage-performance.mdx b/storage/object/reference-content/optimize-object-storage-performance.mdx
index 844cc43d67..9be71f2878 100644
--- a/storage/object/reference-content/optimize-object-storage-performance.mdx
+++ b/storage/object/reference-content/optimize-object-storage-performance.mdx
@@ -14,7 +14,7 @@ categories:
- object-storage
---
-[Scaleway Object Storage](/storage/object/concepts/#object-storage) is a highly resilient and versatile service that guarantees the reliability and accessibility of your data, while being fully [S3-compatible](/storage/object/concepts/#s3) and user-friendly.
+[Scaleway Object Storage](/storage/object/concepts/#object-storage) is a highly resilient and versatile service that guarantees the reliability and accessibility of your data, while being fully [Amazon S3-compatible](/storage/object/concepts/#s3) and user-friendly.
Even though it is designed to provide best-in-class latency and throughput, user infrastructure plays a predominant role in achieving optimum efficiency, as many different factors can have an impact on performance, such as your hardware, your software stack, or the way you manage your objects.
@@ -50,7 +50,7 @@ For example, if the most CPU-intensive operation uses 20% of your CPU, you can e
### Geographic location
-The physical distance to the hardware hosting your Object Storage can also have an impact on performance, especially on latency. Make sure to benchmark the different [regions](/storage/object/concepts/##region-and-availability-zone) where Object Storage is available to compare latency on your mission-critical S3 operations.
+The physical distance to the hardware hosting your Object Storage can also have an impact on performance, especially on latency. Make sure to benchmark the different [regions](/storage/object/concepts/#region-and-availability-zone) where Object Storage is available to compare latency on your mission-critical operations.
For instance, media and content distribution are often heavily affected by the physical distance between the host and the client, as objects are usually large in this scenario.
diff --git a/storage/object/reference-content/s3-iam-permissions-equivalence.mdx b/storage/object/reference-content/s3-iam-permissions-equivalence.mdx
index 6a6fc1c990..896b1cb3ac 100644
--- a/storage/object/reference-content/s3-iam-permissions-equivalence.mdx
+++ b/storage/object/reference-content/s3-iam-permissions-equivalence.mdx
@@ -1,11 +1,11 @@
---
meta:
- title: S3 and IAM permissions equivalence
- description: Understand how IAM permissions in S3 relate to Scaleway Object Storage.
+ title: Amazon S3 and IAM permissions equivalence
+ description: Understand how IAM permissions in Amazon S3 relate to Scaleway Object Storage.
content:
- h1: S3 and IAM permissions equivalence
- paragraph: Understand how IAM permissions in S3 relate to Scaleway Object Storage.
-tags: object-storage s3 aws action equivalent iam permission set
+ h1: Amazon S3 and IAM permissions equivalence
+ paragraph: Understand how IAM permissions in Amazon S3 relate to Scaleway Object Storage.
+tags: object-storage amazon-s3 aws action equivalent iam permission set
categories:
- storage
- object
@@ -13,7 +13,7 @@ categories:
## ObjectStorageFullAccess
-| S3 Action | IAM Resource | IAM Action | Authorized |
+| Amazon S3 Action | IAM Resource | IAM Action | Authorized |
|---------------------------------| ------------ |------------|------------|
| DeleteBucketPolicy | Policy | Write | ✅ |
| GetBucketPolicy | Policy | Read | ✅ |
@@ -72,7 +72,7 @@ categories:
## ObjectStorageReadOnly
-| S3 Action | IAM Resource | IAM Action | Authorized |
+| Amazon S3 Action | IAM Resource | IAM Action | Authorized |
| ------------------------------- | ------------ | ---------- | -----------|
| AbortMultipartUpload | Object | Delete | |
| CompleteMultipartUpload | Object | Create | |
@@ -131,7 +131,7 @@ categories:
## ObjectStorageBucketsRead
-| S3 Action | IAM Resource | IAM Action | Authorized |
+| Amazon S3 Action | IAM Resource | IAM Action | Authorized |
|---------------------------------|--------------|------------|------------|
| AbortMultipartUpload | Object | Delete | |
| CompleteMultipartUpload | Object | Create | |
@@ -190,7 +190,7 @@ categories:
## ObjectStorageBucketsWrite
-| S3 Action | IAM Resource | IAM Action | Authorized |
+| Amazon S3 Action | IAM Resource | IAM Action | Authorized |
|---------------------------------|--------------|------------|------------|
| AbortMultipartUpload | Object | Delete | |
| CompleteMultipartUpload | Object | Create | |
@@ -249,7 +249,7 @@ categories:
## ObjectStorageBucketsDelete
-| S3 Action | IAM Resource | IAM Action | Authorized |
+| Amazon S3 Action | IAM Resource | IAM Action | Authorized |
|---------------------------------|--------------|------------|------------|
| AbortMultipartUpload | Object | Delete | |
| CompleteMultipartUpload | Object | Create | |
@@ -308,7 +308,7 @@ categories:
## ObjectStorageObjectsRead
-| S3 Action | IAM Resource | IAM Action | Authorized |
+| Amazon S3 Action | IAM Resource | IAM Action | Authorized |
|---------------------------------|--------------|------------|------------|
| AbortMultipartUpload | Object | Delete | |
| CompleteMultipartUpload | Object | Create | |
@@ -367,7 +367,7 @@ categories:
## ObjectStorageObjectsWrite
-| S3 Action | IAM Resource | IAM Action | Authorized |
+| Amazon S3 Action | IAM Resource | IAM Action | Authorized |
|---------------------------------|--------------|------------|------------|
| AbortMultipartUpload | Object | Delete | |
| CompleteMultipartUpload | Object | Create | ✅ |
@@ -426,7 +426,7 @@ categories:
## ObjectStorageObjectsDelete
-| S3 Action | IAM Resource | IAM Action | Authorized |
+| Amazon S3 Action | IAM Resource | IAM Action | Authorized |
|---------------------------------|--------------|------------|------------|
| AbortMultipartUpload | Object | Delete | ✅ |
| CompleteMultipartUpload | Object | Create | |
diff --git a/storage/object/troubleshooting/api-key-does-not-work.mdx b/storage/object/troubleshooting/api-key-does-not-work.mdx
index 2f6847ad2a..38bd43388d 100644
--- a/storage/object/troubleshooting/api-key-does-not-work.mdx
+++ b/storage/object/troubleshooting/api-key-does-not-work.mdx
@@ -28,9 +28,9 @@ When using third-party API or CLI tools, such as the [AWS CLI](/storage/object/a
## Cause
-The API key you used to configure the S3 third-party tool has a [preferred Project](/identity-and-access-management/iam/concepts/#preferred-project) assigned.
+The API key you used to configure the Amazon S3-compatible third-party tool has a [preferred Project](/identity-and-access-management/iam/concepts/#preferred-project) assigned.
-If you try to perform S3 operations in a Project that is **NOT** the [preferred Project](/identity-and-access-management/iam/concepts/#preferred-project) using a third-party tool, you will not be able to access your resources, resulting in an error message or an empty response.
+If you try to perform Object Storage operations in a Project that is **NOT** the [preferred Project](/identity-and-access-management/iam/concepts/#preferred-project) using a third-party tool, you will not be able to access your resources, resulting in an error message or an empty response.
## Solution
@@ -39,14 +39,14 @@ You can change the preferred project of your API key:
- by editing it from the [Scaleway console](/identity-and-access-management/iam/how-to/manage-api-keys/#how-to-edit-an-api-key)
- by [overriding it while making an API call](/identity-and-access-management/iam/api-cli/using-api-key-object-storage/#overriding-the-preferred-project-when-making-a-call)
-You should now be able to list your buckets using a supported S3-compatible third-party tool.
+You should now be able to list your buckets using a supported Amazon S3-compatible third-party tool.
## Going further
- Refer to the documentation on [using IAM API keys with Object Storage](/identity-and-access-management/iam/api-cli/using-api-key-object-storage/) for more information.
- If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below:
- - S3 Endpoint (e.g. `s3.fr-par.scw.cloud`)
+ - Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`)
- Bucket name
- Object name (if the request concerns an object)
- Request type (PUT, GET, etc.)
diff --git a/storage/object/troubleshooting/cannot-access-data.mdx b/storage/object/troubleshooting/cannot-access-data.mdx
index d3e3b36adc..11d994b942 100644
--- a/storage/object/troubleshooting/cannot-access-data.mdx
+++ b/storage/object/troubleshooting/cannot-access-data.mdx
@@ -26,7 +26,7 @@ I am experiencing issues while trying to access my buckets and objects stored on
- Go to the [Status page](https://status.scaleway.com/) to see if there is an ongoing incident on the Scaleway infrastructure.
-- Retrieve the logs of your buckets using any S3-compatible tool to identify the cause of the problem:
+- Retrieve the logs of your buckets using any Amazon S3-compatible tool to identify the cause of the problem:
- [Rclone](https://rclone.org/docs/#logging)
- [S3cmd](https://s3tools.org/usage)
- [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc-admin/mc-admin-logs.html#mc-admin-logs)
@@ -39,7 +39,7 @@ I am experiencing issues while trying to access my buckets and objects stored on
## Going further
If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below:
- - S3 Endpoint (e.g. `s3.fr-par.scw.cloud`)
+ - Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`)
- Bucket name
- Object name (if the request concerns an object)
- Request type (PUT, GET, etc.)
diff --git a/storage/object/troubleshooting/cannot-delete-bucket.mdx b/storage/object/troubleshooting/cannot-delete-bucket.mdx
index eb638de802..b45e84944c 100644
--- a/storage/object/troubleshooting/cannot-delete-bucket.mdx
+++ b/storage/object/troubleshooting/cannot-delete-bucket.mdx
@@ -40,7 +40,7 @@ I cannot delete my Scaleway Object Storage bucket.
- Refer to the documentation on [how to delete a bucket](/storage/object/how-to/delete-a-bucket/) for more information.
- If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below:
- - S3 Endpoint (e.g. `s3.fr-par.scw.cloud`)
+ - Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`)
- Bucket name
- Object name (if the request concerns an object)
- Request type (PUT, GET, etc.)
diff --git a/storage/object/troubleshooting/cannot-restore-glacier.mdx b/storage/object/troubleshooting/cannot-restore-glacier.mdx
index f651beb259..c8cd4dd1eb 100644
--- a/storage/object/troubleshooting/cannot-restore-glacier.mdx
+++ b/storage/object/troubleshooting/cannot-restore-glacier.mdx
@@ -56,7 +56,7 @@ The `"Restore": "ongoing-request=\"true\"",` line indicates that the restore ope
- Refer to the documentation on [how to restore objects from Glacier](/storage/object/how-to/restore-an-object-from-glacier/) for more information.
- If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below:
- - S3 Endpoint (e.g. `s3.fr-par.scw.cloud`)
+ - Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`)
- Bucket name
- Object name (if the request concerns an object)
- Request type (PUT, GET, etc.)
diff --git a/storage/object/troubleshooting/lost-bucket-access-bucket-policy.mdx b/storage/object/troubleshooting/lost-bucket-access-bucket-policy.mdx
index 6e449d0ebb..742c97fdf6 100644
--- a/storage/object/troubleshooting/lost-bucket-access-bucket-policy.mdx
+++ b/storage/object/troubleshooting/lost-bucket-access-bucket-policy.mdx
@@ -73,7 +73,7 @@ If you have the permission to apply a bucket policy, you can also delete it. To
- Refer to the [bucket policies overview](/storage/object/api-cli/bucket-policy/) for more information on the different elements of a bucket policy.
- If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below:
- - S3 Endpoint (e.g. `s3.fr-par.scw.cloud`)
+ - Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`)
- Bucket name
- Object name (if the request concerns an object)
- Request type (PUT, GET, etc.)
diff --git a/storage/object/troubleshooting/low-performance.mdx b/storage/object/troubleshooting/low-performance.mdx
index ed54488bb7..f5e554baa7 100644
--- a/storage/object/troubleshooting/low-performance.mdx
+++ b/storage/object/troubleshooting/low-performance.mdx
@@ -26,7 +26,7 @@ I am noticing decreased throughputs, timeouts, high latency, and overall instabi
- Go to the [Status page](https://status.scaleway.com/) to see if there is an ongoing incident on the Scaleway infrastructure.
-- Retrieve the logs of your buckets using any S3-compatible tool to identify the cause of the problem:
+- Retrieve the logs of your buckets using any Amazon S3-compatible tool to identify the cause of the problem:
- [Rclone](https://rclone.org/docs/#logging)
- [S3cmd](https://s3tools.org/usage)
- [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc-admin/mc-admin-logs.html#mc-admin-logs)
@@ -37,7 +37,7 @@ I am noticing decreased throughputs, timeouts, high latency, and overall instabi
- Refer to the documentation on [how to optimize your Object Storage performance](/storage/object/reference-content/optimize-object-storage-performance/) for more information.
- If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below:
- - S3 Endpoint (e.g. `s3.fr-par.scw.cloud`)
+ - Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`)
- Bucket name
- Object name (if the request concerns an object)
- Request type (PUT, GET, etc.)
diff --git a/styles/scw_styles/HeadingSentenceCase.yml b/styles/scw_styles/HeadingSentenceCase.yml
index bc3fe0cc45..31cb9d6797 100644
--- a/styles/scw_styles/HeadingSentenceCase.yml
+++ b/styles/scw_styles/HeadingSentenceCase.yml
@@ -46,7 +46,7 @@ exceptions:
- Object Storage
- Glacier
- Standard
- - S3
+ - Amazon S3
- Block Storage
- Managed Database
- Managed Databases
diff --git a/tutorials/abort-multipart-upload-minio/index.mdx b/tutorials/abort-multipart-upload-minio/index.mdx
index 0520f03bbf..b0b871f235 100644
--- a/tutorials/abort-multipart-upload-minio/index.mdx
+++ b/tutorials/abort-multipart-upload-minio/index.mdx
@@ -1,10 +1,10 @@
---
meta:
- title: Aborting Incomplete S3 Multipart Uploads with MinIO Client
- description: This page explains how to abort an incomplete S3 multipart upload with the MinIO client.
+ title: Aborting Incomplete Multipart Uploads with MinIO Client
+ description: This page explains how to abort an incomplete multipart upload with the MinIO client.
content:
- h1: Aborting Incomplete S3 Multipart Uploads with MinIO Client
- paragraph: This page explains how to abort an incomplete S3 multipart upload with the MinIO client.
+ h1: Aborting Incomplete Multipart Uploads with MinIO Client
+ paragraph: This page explains how to abort an incomplete multipart upload with the MinIO client.
tags: minio multipart-uploads
categories:
- object-storage
@@ -13,13 +13,13 @@ dates:
hero: assets/scaleway_minio.webp
---
-## S3 Object Storage - Multipart Upload Overview
+## Object Storage - Multipart Upload Overview
[Multipart Uploads](/storage/object/api-cli/multipart-uploads/) allow you to upload large files (up to 5 TB) to the Object Storage platform in multiple parts. This enables faster, more flexible uploads.
If you do not complete a multipart upload, all the uploaded parts will still be stored and counted as part of your storage usage. Multipart uploads can be aborted manually [via the API and CLI](/storage/object/api-cli/multipart-uploads/#aborting-a-multipart-upload) or automatically using a [Lifecycle rule](/storage/object/api-cli/lifecycle-rules-api/#setting-rules-for-incomplete-multipart-uploads).
-If you use the API or the AWS CLI, you will have to abort each incomplete multipart upload independently. However, there is an easier and faster way to abort multipart uploads, using the open-source S3-compatible client [mc](https://github.com/minio/mc), from MinIO. In this tutorial, we show you how to use mc to abort and clean up all your incomplete multipart uploads at once.
+If you use the API or the AWS CLI, you will have to abort each incomplete multipart upload independently. However, there is an easier and faster way to abort multipart uploads: MinIO's open-source Amazon S3-compatible client [mc](https://github.com/minio/mc). In this tutorial, we show you how to use mc to abort and clean up all your incomplete multipart uploads at once.
diff --git a/tutorials/building-ai-application-function-calling/assets/function-calling.webp b/tutorials/building-ai-application-function-calling/assets/function-calling.webp
new file mode 100644
index 0000000000..befbc5094d
Binary files /dev/null and b/tutorials/building-ai-application-function-calling/assets/function-calling.webp differ
diff --git a/tutorials/building-ai-application-function-calling/index.mdx b/tutorials/building-ai-application-function-calling/index.mdx
new file mode 100644
index 0000000000..81c44c6940
--- /dev/null
+++ b/tutorials/building-ai-application-function-calling/index.mdx
@@ -0,0 +1,277 @@
+---
+meta:
+ title: Get started with agentic AI - building a flight assistant with function calling on open-weight Llama 3.1
+ description: Learn how to implement function calling in your applications using a practical flight schedule example.
+content:
+ h1: Get started with agentic AI - building a flight assistant with function calling on open-weight Llama 3.1
+ paragraph: Create a smart flight assistant that can understand natural language queries and return structured flight information using function calling capabilities.
+tags: AI function-calling LLM python structured-data
+categories:
+ - managed-inference
+ - generative-apis
+hero: assets/function-calling.webp
+dates:
+ validation: 2024-10-25
+ posted: 2024-10-25
+---
+
+In today's AI-driven world, enabling natural language interactions with structured data systems has become increasingly important. Function calling allows AI models like Llama 3.1 to bridge the gap between human queries and programmatic functions, creating powerful agents for many use cases.
+
+This tutorial will guide you through creating a simple flight schedule assistant that can understand natural language queries about flights and return structured information. We'll use Python and the OpenAI SDK to implement function calling on Llama 3.1, making it easy to integrate this solution into your existing applications.
+
+
+
+- A Scaleway account logged into the [console](https://console.scaleway.com)
+- Python 3.7 or higher
+- An API key from Scaleway [Identity and Access Management](https://www.scaleway.com/en/docs/identity-and-access-management/iam/)
+- Access to Scaleway [Generative APIs](/ai-data/generative-apis/quickstart/) or to Scaleway [Managed Inference](/ai-data/managed-inference/quickstart/)
+- The `openai` Python library installed
+
+## Understanding function calling
+
+Function calling allows AI models to:
+- Understand when to use specific functions based on user queries
+- Extract relevant parameters from natural language
+- Format the extracted information into structured function calls
+- Process the function results and present them in a user-friendly way
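+
+Concretely, when the model decides to call a function, the chat completion response carries a tool call instead of a plain text answer. Here is a sketch of its shape in the OpenAI-compatible format used in this tutorial (all values are hypothetical):
+
+```python
+# Hypothetical tool call, as found in response.choices[0].message.tool_calls
+tool_call = {
+    "id": "call_abc123",             # identifier assigned by the API
+    "type": "function",
+    "function": {
+        "name": "get_flight_schedule",
+        # Arguments arrive as a JSON string extracted from the user query
+        "arguments": '{"departure_airport": "CDG", "destination_airport": "LHR", "departure_date": "2024-11-01"}'
+    }
+}
+```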
+
+## Setting up the environment
+
+1. Create a new directory for your project:
+ ```bash
+ mkdir flight-assistant
+ cd flight-assistant
+ ```
+
+2. Create and activate a virtual environment:
+ ```bash
+ python3 -m venv venv
+ source venv/bin/activate # On Windows, use `venv\Scripts\activate`
+ ```
+
+3. Install the required library:
+ ```bash
+ pip install openai
+ ```
+
+## Creating the flight schedule function
+
+First, let's create a simple function that returns flight schedules. Create a file called `flight_schedule.py`:
+
+```python
+def get_flight_schedule(departure_airport: str, destination_airport: str, departure_date: str) -> dict:
+ """
+ Get available flights between two airports on a specific date.
+
+ Args:
+ departure_airport (str): IATA code of departure airport (e.g., "CDG")
+ destination_airport (str): IATA code of destination airport (e.g., "LHR")
+ departure_date (str): Date in YYYY-MM-DD format
+
+ Returns:
+ dict: Available flights with their details
+ """
+ # Mock flight database - in a real application, this would query an actual database
+ flights = {
+ "CDG-LHR-2024-11-01": [
+ {
+ "flight_number": "AF123",
+ "airline": "Air France",
+ "departure_time": "08:00",
+ "arrival_time": "09:00",
+ "price": "€150"
+ },
+ {
+ "flight_number": "BA456",
+ "airline": "British Airways",
+ "departure_time": "14:00",
+ "arrival_time": "15:00",
+ "price": "€180"
+ }
+ ]
+ }
+
+ key = f"{departure_airport}-{destination_airport}-{departure_date}"
+ return flights.get(key, {"error": "No flights found for this route and date."})
+```
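+
+Before wiring this function to the model, you can sanity-check it on its own using the mock data defined above:
+
+```python
+from flight_schedule import get_flight_schedule
+
+# Matches a key in the mock database: returns the two flights defined above
+print(get_flight_schedule("CDG", "LHR", "2024-11-01"))
+
+# Unknown route: returns the error dictionary
+print(get_flight_schedule("JFK", "NRT", "2024-11-01"))
+```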
+
+## Setting up the AI assistant
+
+Create a new file called `assistant.py` to handle the AI interactions:
+
+```python
+from openai import OpenAI
+import os
+import json
+from flight_schedule import get_flight_schedule
+
+# Initialize the OpenAI client with Scaleway configuration
+
+MODEL="meta/llama-3.1-70b-instruct:fp8"
+# Use the model name corresponding to your Managed Inference deployment or Generative APIs model
+
+API_KEY = os.environ.get("SCALEWAY_API_KEY")
+BASE_URL = os.environ.get("SCALEWAY_INFERENCE_ENDPOINT_URL")
+# use https://api.scaleway.ai/v1 for Scaleway Generative APIs
+
+client = OpenAI(
+ base_url=BASE_URL,
+ api_key=API_KEY
+)
+
+# Define the tool specification
+tools = [{
+ "type": "function",
+ "function": {
+ "name": "get_flight_schedule",
+ "description": "Get available flights between two airports on a specific date",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "departure_airport": {
+ "type": "string",
+ "description": "IATA code of departure airport (e.g., CDG, LHR)"
+ },
+ "destination_airport": {
+ "type": "string",
+ "description": "IATA code of destination airport (e.g., CDG, LHR)"
+ },
+ "departure_date": {
+ "type": "string",
+ "description": "Date in YYYY-MM-DD format"
+ }
+ },
+ "required": ["departure_airport", "destination_airport", "departure_date"]
+ }
+ }
+}]
+
+def process_query(user_query: str) -> str:
+ """Process a natural language query about flights."""
+
+ # Initial conversation with the model
+ messages = [
+ {
+ "role": "system",
+ "content": "You are a helpful flight assistant. Help users find flights by calling the appropriate function."
+ },
+ {
+ "role": "user",
+ "content": user_query
+ }
+ ]
+
+ # Get the model's response
+ response = client.chat.completions.create(
+ model=MODEL,
+ messages=messages,
+ tools=tools,
+ tool_choice="auto"
+ )
+
+ # Check if the model wants to call a function
+ response_message = response.choices[0].message
+
+ if response_message.tool_calls:
+ # Get function call details
+ tool_call = response_message.tool_calls[0]
+ function_name = tool_call.function.name
+ function_args = json.loads(tool_call.function.arguments)
+
+ # Execute the function
+ if function_name == "get_flight_schedule":
+ function_response = get_flight_schedule(**function_args)
+
+ # Add the function result to the conversation
+ messages.append(response_message)
+ messages.append({
+ "role": "tool",
+ "content": json.dumps(function_response),
+ "tool_call_id": tool_call.id
+ })
+
+ # Get final response
+ final_response = client.chat.completions.create(
+ model=MODEL,
+ messages=messages
+ )
+
+ return final_response.choices[0].message.content
+
+ return response_message.content
+```
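+
+With `assistant.py` in place, you can already try the assistant from a Python shell, assuming the `SCALEWAY_API_KEY` and `SCALEWAY_INFERENCE_ENDPOINT_URL` environment variables are exported (see the Running the application section below):
+
+```python
+from assistant import process_query
+
+# Requires SCALEWAY_API_KEY and SCALEWAY_INFERENCE_ENDPOINT_URL to be exported
+print(process_query("What flights are available from CDG to LHR on 2024-11-01?"))
+```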
+
+## Creating the main application
+
+Create a file called `main.py` to run the assistant:
+
+```python
+from assistant import process_query
+
+def main():
+ print("Welcome to the Flight Schedule Assistant!")
+ print("Ask about flights using natural language (or type 'quit' to exit)")
+ print("Example: What flights are available from CDG to LHR on November 1st, 2024?")
+
+ while True:
+ query = input("\nYour query: ")
+ if query.lower() == 'quit':
+ break
+
+ response = process_query(query)
+ print("\nAssistant:", response)
+
+if __name__ == "__main__":
+ main()
+```
+
+## Running the application
+
+1. Set your Scaleway API key:
+ ```bash
+ export SCALEWAY_API_KEY="your-api-key-here"
+ ```
+
+2. Set the base URL for the OpenAI client:
+ ```bash
+ export SCALEWAY_INFERENCE_ENDPOINT_URL="your-inference-endpoint-here"
+ ```
+
+3. Run the application:
+ ```bash
+ python main.py
+ ```
+
+4. Try some example queries:
+ - "What flights are available from CDG to LHR on November 1st?"
+ - "Show me morning flights from CDG to LHR on November 1st"
+ - "Are there any afternoon flights from CDG to LHR on 2024-11-01?"
+
+## How it works
+
+1. **User input**: The application receives a natural language query about flights.
+
+2. **Function recognition**: The AI model analyzes the query and determines that it needs flight schedule information.
+
+3. **Parameter extraction**: The model extracts key information (airports, date) from the query.
+
+4. **Function calling**: The model returns the function call for your application to execute, in this case `get_flight_schedule` with the parameters extracted in the previous step.
+
+5. **Response generation**: The model receives the function's response and generates a natural language reply for the user.
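+
+Put differently, by the time the final response is generated, the `messages` list sent to the model looks roughly like this (a simplified sketch; identifiers and content values are hypothetical):
+
+```python
+messages = [
+    {"role": "system", "content": "You are a helpful flight assistant. ..."},
+    {"role": "user", "content": "What flights are available from CDG to LHR on November 1st?"},
+    # Assistant turn carrying the tool call (appended as the SDK message object in assistant.py)
+    {"role": "assistant", "tool_calls": [{
+        "id": "call_abc123", "type": "function",
+        "function": {"name": "get_flight_schedule",
+                     "arguments": '{"departure_airport": "CDG", "destination_airport": "LHR", "departure_date": "2024-11-01"}'}}]},
+    # Tool result fed back to the model before the final completion
+    {"role": "tool", "tool_call_id": "call_abc123", "content": '[{"flight_number": "AF123", ...}]'},
+]
+```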
+
+## Customizing the application
+
+You can enhance the flight assistant in several ways:
+
+1. **Add real data**: Replace the mock flight database with actual flight API calls.
+2. **Expand functions**: Add functions for booking flights, checking prices, or getting airport information (see the sketch after this list).
+3. **Improve error handling**: Add validation for airport codes and dates.
+4. **Add memory**: Implement conversation history to handle follow-up questions.
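+
+For instance, expanding the assistant with a booking capability (item 2 above) only requires appending another tool specification to the `tools` list in `assistant.py`. The `book_flight` function below is hypothetical and left unimplemented:
+
+```python
+tools.append({
+    "type": "function",
+    "function": {
+        "name": "book_flight",  # hypothetical: implement and dispatch it like get_flight_schedule
+        "description": "Book a flight by flight number and departure date",
+        "parameters": {
+            "type": "object",
+            "properties": {
+                "flight_number": {"type": "string", "description": "e.g., AF123"},
+                "departure_date": {"type": "string", "description": "Date in YYYY-MM-DD format"}
+            },
+            "required": ["flight_number", "departure_date"]
+        }
+    }
+})
+```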
+
+## Conclusion
+
+Function calling bridges the gap between natural language processing and structured data operations. This flight schedule assistant demonstrates how to implement function calling to create intuitive interfaces for your applications.
+
+
+ Remember to handle user data responsibly and validate all inputs before making actual flight queries or bookings in a production environment.
+
diff --git a/tutorials/ceph-cluster/index.mdx b/tutorials/ceph-cluster/index.mdx
index 56244c9992..930fa7074d 100644
--- a/tutorials/ceph-cluster/index.mdx
+++ b/tutorials/ceph-cluster/index.mdx
@@ -193,7 +193,7 @@ Deploy the Ceph cluster on your machines by following these steps:
### Deploying a Ceph Object Gateway (RGW)
-Deploy the Ceph Object Gateway (RGW) to access files using S3-compatible clients:
+Deploy the Ceph Object Gateway (RGW) to access files using Amazon S3-compatible clients:
1. Run the following command on the admin machine:
@@ -225,7 +225,7 @@ Deploy the Ceph Object Gateway (RGW) to access files using S3-compatible clients
3. Verify the installation by accessing `http://ceph-node-a:7480` in a web browser.
-## Creating S3 credentials
+## Creating Object Storage credentials
On the gateway instance (`ceph-node-a`), run the following command to create a new user:
@@ -233,7 +233,7 @@ On the gateway instance (`ceph-node-a`), run the following command to create a n
sudo radosgw-admin user create --uid=johndoe --display-name="John Doe" --email=john@example.com
```
-- Note the `access_key` and `user_key`. Proceed to configure your S3 client, e.g., [aws-cli](/storage/object/api-cli/object-storage-aws-cli/).
+- Note the `access_key` and `user_key`. Proceed to configure your Object Storage client, e.g., [aws-cli](/storage/object/api-cli/object-storage-aws-cli/).
## Configuring AWS-CLI
@@ -286,4 +286,4 @@ Use AWS-CLI to manage objects in your Ceph storage cluster:
## Conclusion
-You have successfully configured an S3-compatible storage cluster using Ceph and three [Dedibox dedicated servers](https://www.scaleway.com/en/dedibox/). You can now manage your data using any S3-compatible tool. For advanced configuration, refer to the official [Ceph documentation](https://docs.ceph.com/docs/master/).
\ No newline at end of file
+You have successfully configured an Amazon S3-compatible storage cluster using Ceph and three [Dedibox dedicated servers](https://www.scaleway.com/en/dedibox/). You can now manage your data using any Amazon S3-compatible tool. For advanced configuration, refer to the official [Ceph documentation](https://docs.ceph.com/docs/master/).
\ No newline at end of file
diff --git a/tutorials/cilicon-self-hosted-ci-on-apple-silicon/index.mdx b/tutorials/cilicon-self-hosted-ci-on-apple-silicon/index.mdx
index ce462dabf6..3346b81f9a 100644
--- a/tutorials/cilicon-self-hosted-ci-on-apple-silicon/index.mdx
+++ b/tutorials/cilicon-self-hosted-ci-on-apple-silicon/index.mdx
@@ -95,7 +95,7 @@ provisioner:
executor: # defaults to 'shell'
maxNumberOfBuilds: # defaults to '1'
downloadLatest: # defaults to 'true'
- downloadURL: # defaults to GitLab official S3 bucket
+ downloadURL: # defaults to GitLab official Object Storage bucket
configToml: >
# Advanced config as custom config.toml file to be appended to the basic config and copied to the runner.
```
diff --git a/tutorials/configure-chef-ubuntu-xenial/index.mdx b/tutorials/configure-chef-ubuntu-xenial/index.mdx
index 9cc4c8aa9b..0fb3a6363d 100644
--- a/tutorials/configure-chef-ubuntu-xenial/index.mdx
+++ b/tutorials/configure-chef-ubuntu-xenial/index.mdx
@@ -9,7 +9,7 @@ tags: Chef Ubuntu Xenial Focal-Fossa
categories:
- instances
dates:
- validation: 2024-04-22
+ validation: 2024-10-28
posted: 2018-07-05
---
diff --git a/tutorials/configure-dvc-with-object-storage/index.mdx b/tutorials/configure-dvc-with-object-storage/index.mdx
index 6bdcbf9901..2d928cac83 100644
--- a/tutorials/configure-dvc-with-object-storage/index.mdx
+++ b/tutorials/configure-dvc-with-object-storage/index.mdx
@@ -5,7 +5,7 @@ meta:
content:
h1: Configuring DVC with Object Storage
paragraph: This page provides information on how to configure DVC with Scaleway Object Storage.
-tags: s3 dvc machine-learning data-science
+tags: amazon-s3 dvc machine-learning data-science
categories:
- object-storage
dates:
@@ -17,7 +17,7 @@ Git is unarguably the most popular and powerful version control system to store
However, when it comes to large datasets, you might need to turn to third-party version control tools that are specifically designed to handle them.
-Data Version Control (DVC) was specifically designed with this use case in mind. It works alongside Git and allows you to store your data in the remote storage of your choice (such as a Scaleway S3-enabled bucket) while storing only the metadata in a Git repository.
+Data Version Control (DVC) was specifically designed with this use case in mind. It works alongside Git and allows you to store your data in the remote storage of your choice (such as a Scaleway Object Storage bucket) while storing only the metadata in a Git repository.
In this tutorial, you learn how to use [Scaleway Object Storage](https://www.scaleway.com/en/object-storage/) as a remote storage for DVC.
@@ -39,7 +39,7 @@ In this tutorial, you learn how to use [Scaleway Object Storage](https://www.sca
pip3 install dvc
```
-2. Run the following command to install the S3 dependencies:
+2. Run the following command to install the Amazon S3 dependencies:
```bash
pip3 install "dvc[s3]"
```
@@ -93,7 +93,7 @@ In this tutorial, you learn how to use [Scaleway Object Storage](https://www.sca
dvc remote add -d myremote s3://my-bucket/path
```
-2. Run the following command to set the S3 endpoint of your remote storage:
+2. Run the following command to set the Object Storage endpoint of your remote storage:
```bash
dvc remote modify myremote \
endpointurl https://s3.fr-par.scw.cloud
diff --git a/tutorials/configure-failover-proxmox/index.mdx b/tutorials/configure-failover-proxmox/index.mdx
index 96a23b20ee..015d3a4317 100644
--- a/tutorials/configure-failover-proxmox/index.mdx
+++ b/tutorials/configure-failover-proxmox/index.mdx
@@ -9,7 +9,7 @@ tags: dedicated-server Proxmox iso-file
categories:
- dedibox
dates:
- validation: 2024-04-22
+ validation: 2024-10-29
posted: 2020-01-23
---
diff --git a/tutorials/configure-ipv6-virtual-machine-esxi/index.mdx b/tutorials/configure-ipv6-virtual-machine-esxi/index.mdx
index 2b2968ad4a..56f7ba11de 100644
--- a/tutorials/configure-ipv6-virtual-machine-esxi/index.mdx
+++ b/tutorials/configure-ipv6-virtual-machine-esxi/index.mdx
@@ -9,7 +9,7 @@ tags: esxi virtual-machine ubuntu
categories:
- dedibox
dates:
- validation: 2024-04-22
+ validation: 2024-10-28
posted: 2022-02-24
---
diff --git a/tutorials/configure-netbox-managed-postgresql-database/index.mdx b/tutorials/configure-netbox-managed-postgresql-database/index.mdx
index a4892c3409..2554e51bfb 100644
--- a/tutorials/configure-netbox-managed-postgresql-database/index.mdx
+++ b/tutorials/configure-netbox-managed-postgresql-database/index.mdx
@@ -10,13 +10,13 @@ categories:
- postgresql-and-mysql
hero: assets/scaleway_netbox.webp
dates:
- validation: 2024-04-22
+ validation: 2024-10-29
posted: 2019-11-14
---
NetBox is a web application designed and built to help manage and document large computer networks. It is designed for IP address management (IPAM) and data center infrastructure management (DCIM). The application runs as a web application based on the Django Python framework and uses a PostgreSQL database to store information. The open-source software was developed specifically with the needs of network and infrastructure engineers in mind.
-In this tutorial, you will learn how to install and configure NetBox on an Instance running on Ubuntu 20.04 LTS and a Database for PostgreSQL.
+In this tutorial, you learn how to install and configure NetBox on an Instance running on Ubuntu 20.04 LTS and a Database for PostgreSQL.
diff --git a/tutorials/configure-nextcloud-ubuntu/index.mdx b/tutorials/configure-nextcloud-ubuntu/index.mdx
index f74577d067..34c5b53bc4 100644
--- a/tutorials/configure-nextcloud-ubuntu/index.mdx
+++ b/tutorials/configure-nextcloud-ubuntu/index.mdx
@@ -9,7 +9,7 @@ categories:
- instances
tags: Nextcloud Ubuntu-Bionic-Beaver
dates:
- validation: 2024-04-22
+ validation: 2024-10-29
posted: 2018-10-26
---
diff --git a/tutorials/configure-nginx-lets-encrypt/index.mdx b/tutorials/configure-nginx-lets-encrypt/index.mdx
index 37b52519d1..834dd882f9 100644
--- a/tutorials/configure-nginx-lets-encrypt/index.mdx
+++ b/tutorials/configure-nginx-lets-encrypt/index.mdx
@@ -9,11 +9,11 @@ categories:
- instances
tags: NGINX Let's-Encrypt
dates:
- validation: 2024-04-22
+ validation: 2024-10-29
posted: 2019-02-28
---
-Let's Encrypt, a renowned Certificate Authority (CA), offers a valuable service by providing free TLS/SSL certificates.
+Let's Encrypt, a renowned Certificate Authority (CA), offers a valuable service by providing free TLS/SSL certificates.
These certificates are a key element in enabling secure HTTPS connections on web servers. Let's Encrypt simplifies the process through its user-friendly software client, Certbot, which automates the majority of the steps involved in obtaining and configuring certificates, particularly within the Nginx web server environment.
diff --git a/tutorials/configure-plex-s3/index.mdx b/tutorials/configure-plex-s3/index.mdx
index 1c51d5ef80..ab3409a18f 100644
--- a/tutorials/configure-plex-s3/index.mdx
+++ b/tutorials/configure-plex-s3/index.mdx
@@ -1,7 +1,7 @@
---
meta:
title: Configuring Plex Media Server with Object Storage
- description: This page shows how to set up an s3 media server with Plex and Object Storage
+ description: This page shows how to set up a media server with Plex and Object Storage
content:
h1: Configuring Plex Media Server with Object Storage
paragraph: This page shows how to configure Plex media server with Object Storage
@@ -167,7 +167,7 @@ Plex is a client/server media player system comprising two main components:
- You can upload additional content to your server with any S3-compatible tool, like [Cyberduck](/tutorials/store-s3-cyberduck/).
+ You can upload additional content to your server with any Amazon S3-compatible tool, like [Cyberduck](/tutorials/store-s3-cyberduck/).
9. Click **Next** and then **Finish** to conclude the set-up.
10. Add media to your bucket and trigger a scan of your media folder in the Plex interface. Your media should display. If so, it is all set up. For more information about Plex, refer to their [official documentation](https://support.plex.tv/articles/).
\ No newline at end of file
diff --git a/tutorials/configure-tem-smtp-with-wordpress-plugin/index.mdx b/tutorials/configure-tem-smtp-with-wordpress-plugin/index.mdx
index eee4f3a561..a4cbe73b91 100644
--- a/tutorials/configure-tem-smtp-with-wordpress-plugin/index.mdx
+++ b/tutorials/configure-tem-smtp-with-wordpress-plugin/index.mdx
@@ -10,7 +10,7 @@ categories:
- transactional-email
- instances
dates:
- validation: 2024-04-24
+ validation: 2024-10-29
posted: 2024-04-24
---
@@ -51,7 +51,7 @@ dates:
2. Click the **Launch Setup Wizard** button to configure the plug-in. You are redirected to the WP Mail SMTP welcome page.
3. Click **Let's Get Started**.
4. Choose **Other SMTP**, then click **Save and Continue**.
-5. Enter `smtp.tem.scw.cloud` in the **SMTP Host** field.
+5. Enter `smtp.tem.scaleway.com` in the **SMTP Host** field.
6. Select **TLS** in the **Encryption** field.
7. In the **SMTP Port** enter either of the Transactional Email TLS connection ports: `465` or `2465`.
8. Switch on the **Enable Authentication** toggle.
diff --git a/tutorials/create-openwrt-image-for-scaleway/index.mdx b/tutorials/create-openwrt-image-for-scaleway/index.mdx
index ba1c864690..7dfaf1b604 100644
--- a/tutorials/create-openwrt-image-for-scaleway/index.mdx
+++ b/tutorials/create-openwrt-image-for-scaleway/index.mdx
@@ -292,7 +292,7 @@ In this tutorial, we do not set up cloud-init, but use the same magic IP mechani
## Import the image
-You can use the Scaleway console or your favorite S3 CLI to upload objects into a bucket.
+You can use the Scaleway console or your favorite Amazon S3-compatible CLI tool to upload objects into a bucket.
In this example, we use the [AWS CLI](/storage/object/api-cli/object-storage-aws-cli/).
diff --git a/tutorials/create-serverless-scraping/index.mdx b/tutorials/create-serverless-scraping/index.mdx
index 30fb27d7b6..b0eacb6a97 100644
--- a/tutorials/create-serverless-scraping/index.mdx
+++ b/tutorials/create-serverless-scraping/index.mdx
@@ -47,7 +47,7 @@ We start by creating the scraper program, or the "data producer".
SQS credentials and queue URL are read by the function from environment variables. Those variables are set by Terraform as explained in [one of the next sections](#create-a-terraform-file-to-provision-the-necessary-scaleway-resources). *If you choose another deployment method, such as the [console](https://console.scaleway.com/), do not forget to set them.*
```python
- queue_url = os.getenv('QUEUE_URL')
+ queue_url = os.getenv('QUEUE_URL')
sqs_access_key = os.getenv('SQS_ACCESS_KEY')
sqs_secret_access_key = os.getenv('SQS_SECRET_ACCESS_KEY')
```
@@ -65,10 +65,10 @@ We start by creating the scraper program, or the "data producer".
Using the AWS Python SDK `boto3`, connect to the SQS queue and push the `title` and `url` of articles published less than 15 minutes ago.
```python
sqs = boto3.client(
- 'sqs',
- endpoint_url=SCW_SQS_URL,
- aws_access_key_id=sqs_access_key,
- aws_secret_access_key=sqs_secret_access_key,
+ 'sqs',
+ endpoint_url=SCW_SQS_URL,
+ aws_access_key_id=sqs_access_key,
+ aws_secret_access_key=sqs_secret_access_key,
region_name='fr-par')
for age, titleline in zip(ages, titlelines):
@@ -117,7 +117,7 @@ Next, let's create our consumer function. When receiving a message containing th
Lastly, we write the information into the database. *To keep the whole process completely automatic the* `CREATE_TABLE_IF_NOT_EXISTS` *query is run each time. If you integrate the functions into an existing database, there is no need for it.*
```python
conn = None
- try:
+ try:
conn = pg8000.native.Connection(host=db_host, database=db_name, port=db_port, user=db_user, password=db_password, timeout=15)
conn.run(CREATE_TABLE_IF_NOT_EXISTS)
@@ -136,7 +136,7 @@ As explained in the [Scaleway Functions documentation](/serverless/functions/how
## Create a Terraform file to provision the necessary Scaleway resources
-For the purposes of this tutorial, we show how to provision all resources via Terraform.
+For the purposes of this tutorial, we show how to provision all resources via Terraform.
If you do not want to use Terraform, you can also create the required resources via the [console](https://console.scaleway.com/), the [Scaleway API](https://www.scaleway.com/en/developers/api/), or any other [developer tool](https://www.scaleway.com/en/developers/). Remember that if you do so, you will need to set up environment variables for functions as previously specified. The following documentation may help create the required resources:
@@ -149,7 +149,7 @@ If you do not want to use Terraform, you can also create the required resources
1. Create a directory called `terraform` (at the same level as the `scraper` and `consumer` directories created in the previous steps).
2. Inside it, create a file called `main.tf`.
3. In the file you just created, add the code below to set up the [Scaleway Terraform provider](https://registry.terraform.io/providers/scaleway/scaleway/latest/docs) and your Project:
- ```
+ ```hcl
terraform {
required_providers {
scaleway = {
@@ -167,7 +167,7 @@ If you do not want to use Terraform, you can also create the required resources
}
```
4. Still in the same file, add the code below to provision the SQS resources: SQS activation for the project, separate credentials with appropriate permissions for producer and consumer, and an SQS queue:
- ```
+ ```hcl
resource "scaleway_mnq_sqs" "main" {
project_id = scaleway_account_project.mnq_tutorial.id
}
@@ -202,7 +202,7 @@ If you do not want to use Terraform, you can also create the required resources
}
```
5. Add the code below to provision the Managed Database for PostgreSQL resources. Note that here we are creating a random password and using it for the default and worker user:
- ```
+ ```hcl
resource "random_password" "dev_mnq_pg_exporter_password" {
length = 16
special = true
@@ -219,7 +219,7 @@ If you do not want to use Terraform, you can also create the required resources
node_type = "db-dev-s"
engine = "PostgreSQL-15"
is_ha_cluster = false
- disable_backup = true
+ disable_backup = true
user_name = "mnq_initial_user"
password = random_password.dev_mnq_pg_exporter_password.result
}
@@ -240,7 +240,7 @@ If you do not want to use Terraform, you can also create the required resources
}
resource "scaleway_rdb_database" "main" {
- instance_id = scaleway_rdb_instance.main.id
+ instance_id = scaleway_rdb_instance.main.id
name = "hn-database"
}
@@ -252,14 +252,14 @@ If you do not want to use Terraform, you can also create the required resources
}
resource "scaleway_rdb_privilege" "mnq_user_role" {
- instance_id = scaleway_rdb_instance.main.id
+ instance_id = scaleway_rdb_instance.main.id
user_name = scaleway_rdb_user.worker.name
database_name = scaleway_rdb_database.main.name
permission = "all"
}
```
6. Add the code below to provision the function resources. First, activate the namespace, then locally zip the code and create the functions in the cloud. Note that we are referencing variables from other resources to completely automate the deployment process:
- ```
+ ```hcl
locals {
scraper_folder_path = "../scraper"
consumer_folder_path = "../consumer"
@@ -354,17 +354,17 @@ If you do not want to use Terraform, you can also create the required resources
}
}
```
- Note that a folder `archives` needs to be created manually if you started from scratch. It is included in the git repository.
-7. Add the code below to provision the triggers resources. The cron trigger activates at the minutes `[0, 15, 30, 45]` of every hour. No arguments are passed, but we could do so by specifying them in JSON format in the `args` parameter.
- ```
+ Note that a folder `archives` needs to be created manually if you started from scratch. It is included in the git repository.
+7. Add the code below to provision the trigger resources. The cron trigger activates at minutes `[0, 15, 30, 45]` of every hour. No arguments are passed, but we could do so by specifying them in JSON format in the `args` parameter.
+ ```hcl
resource "scaleway_function_cron" "scraper_cron" {
- function_id = scaleway_function.scraper.id
+ function_id = scaleway_function.scraper.id
schedule = "0,15,30,45 * * * *"
args = jsonencode({})
}
resource "scaleway_function_trigger" "consumer_sqs_trigger" {
- function_id = scaleway_function.consumer.id
+ function_id = scaleway_function.consumer.id
name = "hn-sqs-trigger"
sqs {
project_id = scaleway_mnq_sqs.main.project_id
@@ -378,7 +378,7 @@ Terraform makes this very straightforward. To provision all the resources and ge
```
cd terraform
terraform init
-terraform plan
+terraform plan
terraform apply
```
@@ -386,14 +386,14 @@ terraform apply
Go to the [Scaleway console](https://console.scaleway.com/), and check the logs and metrics for Serverless Functions' execution and Messaging and Queuing SQS queue statistics.
-To make sure the data is correctly stored in the database, you can [connect to it directly](/managed-databases/postgresql-and-mysql/how-to/connect-database-instance/) via a CLI tool such as `psql`.
+To make sure the data is correctly stored in the database, you can [connect to it directly](/managed-databases/postgresql-and-mysql/how-to/connect-database-instance/) via a CLI tool such as `psql`.
Retrieve the instance IP and port of your Managed Database from the console, under the [Managed Database section](https://console.scaleway.com/rdb/instances).
Use the following command to connect to your database. When prompted for a password, you can find it by running `terraform output -json`.
```
psql -h <instance_ip> --port <port> -d hn-database -U worker
```
-When you are done testing, don't forget to clean up! To do so, run:
+When you are done testing, don't forget to clean up! To do so, run:
```
cd terraform
terraform destroy
@@ -405,7 +405,7 @@ We have shown how to asynchronously decouple the producer and the consumer using
While the volume of data processed in this example is quite small, thanks to the Messaging and Queuing SQS queue's robustness and the auto-scaling capabilities of the Serverless Functions, you can adapt this example to manage larger workloads.
Here are some possible extensions to this basic example:
- - Replace the simple proposed logic with your own. What about counting how many times some keywords (e.g: copilot, serverless, microservice) appear in Hacker News articles?
+ - Replace the simple proposed logic with your own. What about counting how many times some keywords (e.g., copilot, serverless, microservice) appear in Hacker News articles?
- Define multiple cron triggers for different websites and pass the website as an argument to the function. Or, create multiple functions that feed the same queue.
- - Use a [Serverless Container](/serverless/containers/quickstart/) instead of the consumer function, and use a command line tool such as `htmldoc` or `pandoc` to convert the scraped articles to PDF and upload the result to a [Scaleway Object Storage](/storage/object/quickstart/) S3 bucket.
+ - Use a [Serverless Container](/serverless/containers/quickstart/) instead of the consumer function, and use a command line tool such as `htmldoc` or `pandoc` to convert the scraped articles to PDF and upload the result to a [Scaleway Object Storage bucket](/storage/object/quickstart/).
- Replace the Managed Database for PostgreSQL with a [Scaleway Serverless Database](/serverless/sql-databases/quickstart/), so that all the infrastructure lives in the serverless ecosystem! *Note that at the moment there is no Terraform support for Serverless Database, hence the choice here to use Managed Database for PostgreSQL*.
\ No newline at end of file
diff --git a/tutorials/deploy-laravel-on-serverless-containers/index.mdx b/tutorials/deploy-laravel-on-serverless-containers/index.mdx
index ff3827cec5..9f1374f2d9 100644
--- a/tutorials/deploy-laravel-on-serverless-containers/index.mdx
+++ b/tutorials/deploy-laravel-on-serverless-containers/index.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This tutorial provides a step-by-step guide for deploying a containerized Laravel application on the Scaleway cloud platform.
tags: laravel php docker nginx fpm
hero: assets/scaleway-umami.webp
-categories:
+categories:
- containers
- container-registry
dates:
@@ -42,7 +42,7 @@ Laravel applications make use of [queues](https://laravel.com/docs/10.x/queues)
2. Create a queue. In this example, we create a `Standard` queue (At-least-once delivery, the order of messages is not preserved) with the default parameters. This queue will be the default queue used by our application.
-
+
3. Generate credentials. In this example, we generate the credentials with `read` and `write` access.
@@ -53,7 +53,7 @@ In this section, we will focus on building the containerized image. With Docker,
1. Create the Dockerfile: we create a `Dockerfile` which is a text file that contains instructions for Docker to build the image. In this example, we specify the base image as `php:fpm-alpine`, install and enable the necessary php dependencies with [`install-php-extensions`](https://github.com/mlocati/docker-php-extension-installer), and determine the commands to be executed at startup.
- ```
+ ```dockerfile
# Dockerfile
FROM --platform=linux/amd64 php:8.2.6-fpm-alpine3.18
@@ -84,9 +84,9 @@ In this section, we will focus on building the containerized image. With Docker,
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```
2. Create the supervisor configuration file. [Supervisor](http://supervisord.org/) is a reliable and efficient process control system for managing and monitoring processes. This is used as multiple processes are running within the container. In this example, we create a `stubs/supervisor/supervisord.conf` file with the following configuration to start the web server Nginx, the php-fpm pool, and 5 workers:
- ```
+ ```conf
# stubs/supervisor/supervisord.conf
- [supervisord]
+ [supervisord]
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0
@@ -128,43 +128,43 @@ In this section, we will focus on building the containerized image. With Docker,
3. Create web server configuration files. Nginx will be used to serve the static assets and to forward the requests to the php-fpm pool for processing. In this example, we create the following configuration files `stubs/nginx/http.d/default.conf` and `stubs/nginx/nginx.conf`.
- ```
+ ```conf
# stubs/nginx/http.d/default.conf
server {
listen 80;
listen [::]:80;
server_name _;
root /var/www/html/public;
-
+
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff";
-
+
index index.php;
-
+
charset utf-8;
-
+
location / {
try_files $uri $uri/ /index.php?$query_string;
}
-
+
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
-
+
error_page 404 /index.php;
-
+
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
}
-
+
location ~ /\.(?!well-known).* {
deny all;
}
}
```
- ```
+ ```conf
# stubs/nginx/nginx.conf
error_log /var/log/nginx/error.log notice;
events {
@@ -183,11 +183,11 @@ In this section, we will focus on building the containerized image. With Docker,
pid /var/run/nginx.pid;
user nginx;
worker_processes auto;
- ```
+ ```
4. Create the php-fpm configuration file. The configuration `stubs/php/php-fpm.d/zz-docker.conf` file should be created, and the php-fpm pool configured to render the dynamic pages of the Laravel application. Depending on the needs of your application, you might have to fine-tune the configuration of the process manager. Further information is available in the [php manual](https://www.php.net/manual/en/install.fpm.configuration.php).
-
- ```
+
+ ```conf
[global]
daemonize = no
@@ -197,27 +197,27 @@ In this section, we will focus on building the containerized image. With Docker,
listen.group = www-data
listen.mode = 0660
- pm = dynamic
- pm.max_children = 75
- pm.start_servers = 10
- pm.min_spare_servers = 5
- pm.max_spare_servers = 20
+ pm = dynamic
+ pm.max_children = 75
+ pm.start_servers = 10
+ pm.min_spare_servers = 5
+ pm.max_spare_servers = 20
pm.process_idle_timeout = 10s
```
5. Build the docker image.
- ```
+ ```sh
docker build -t my-image .
```
-## Creating Container Registry
+## Creating Container Registry
1. [Create a Scaleway Container Registry namespace](/containers/container-registry/how-to/create-namespace/) in the `PAR` region. Set the visibility to `Private` to avoid having your container retrieved without proper authentication and authorization.
2. Run the following command in your local terminal to log in to the newly created Container Registry.
- ```
+ ```sh
docker login rg.fr-par.scw.cloud/namespace-zen-feistel -u nologin --password-stdin <<< "$SCW_SECRET_KEY"
```
@@ -226,8 +226,8 @@ In this section, we will focus on building the containerized image. With Docker,
3. Tag the image and push it to the Container Registry namespace.
-
- ```
+
+ ```sh
docker tag my-image rg.fr-par.scw.cloud/namespace-zen-feistel/my-image:v1
docker push rg.fr-par.scw.cloud/namespace-zen-feistel/my-image:v1
```
@@ -237,7 +237,7 @@ In this section, we will focus on building the containerized image. With Docker,
The Scaleway documentation website provides a Quickstart on how to [create and manage a Serverless Container Namespace](/serverless/containers/quickstart/).
1. Create a Serverless Containers namespace. In this example, we create the `my-laravel-application` namespace and configure the environment variables and secrets necessary for our application. In particular, we must add all the variables needed to connect to the previously created SQS/SNS queue.
-
+
By default, Laravel expects the following environment variables/secrets to be filled in for queues: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_DEFAULT_REGION`, `QUEUE_CONNECTION`, `SQS_PREFIX` and `SQS_QUEUE`.
2. Deploy the application. Click **+ Deploy a Container** once the namespace is created, and follow the instructions of the creation wizard. Select the registry namespace and the previously uploaded Docker image and configure the listening port (the Nginx web server is listening on port 80). For the CPU and memory, define at least 560 mvCPU and 256 MB respectively. To reduce the limitations due to [cold start](/serverless/containers/concepts/#cold-start), we will run at least 1 instance.
@@ -274,7 +274,7 @@ By default, some metrics will be available in the Scaleway console. However, to
To test the load on the application, there is a basic test route that pushes a job into the queue and returns the welcome page.
-``` php
+```php
# routes/web.php
use App\Jobs\ProcessPodcast;
@@ -287,7 +287,7 @@ Route::get('/test', function () {
```
The job does nothing but wait for a couple of seconds.
-``` php
+```php
# app/Jobs/ProcessPodcast
class ProcessPodcast implements ShouldQueue
@@ -300,11 +300,11 @@ class ProcessPodcast implements ShouldQueue
```
Then, use `hey` to send 400 requests (20 concurrent requests) to this route.
-```
+```sh
hey -n 400 -q 20 https://example.com/test
```
-We can see that our deployment is not sufficiently sized to handle such workload and the response times are far from ideal.
+We can see that our deployment is not sufficiently sized to handle such a workload, and the response times are far from ideal.
```
Response time histogram:
diff --git a/tutorials/deploy-nextcloud-s3/index.mdx b/tutorials/deploy-nextcloud-s3/index.mdx
index 83c1be5d02..ed32715dc8 100644
--- a/tutorials/deploy-nextcloud-s3/index.mdx
+++ b/tutorials/deploy-nextcloud-s3/index.mdx
@@ -143,7 +143,7 @@ NextCloud can use Object Storage as primary storage. This gives you the possibil
```
nano /var/www/nextcloud/config/config.php
```
-3. Add a configuration block for S3-compatible storage, as follows:
+3. Add a configuration block for Amazon S3-compatible storage, as follows:
```
'objectstore' => array(
'class' => '\\OC\\Files\\ObjectStore\\S3',
diff --git a/tutorials/deploy-penpot-with-docker-instantapp/index.mdx b/tutorials/deploy-penpot-with-docker-instantapp/index.mdx
index 805199637b..e9c6f3f81a 100644
--- a/tutorials/deploy-penpot-with-docker-instantapp/index.mdx
+++ b/tutorials/deploy-penpot-with-docker-instantapp/index.mdx
@@ -9,7 +9,7 @@ tags: penpot docker instantapp
categories:
- instances
dates:
- validation: 2024-04-22
+ validation: 2024-10-29
posted: 2022-09-28
hero: assets/scaleway-penpot.webp
---
diff --git a/tutorials/deploy-saas-application/index.mdx b/tutorials/deploy-saas-application/index.mdx
index a7e1cf1ae8..bdce57cccf 100644
--- a/tutorials/deploy-saas-application/index.mdx
+++ b/tutorials/deploy-saas-application/index.mdx
@@ -41,7 +41,7 @@ You will learn how to store environment variables with Kubernetes secrets and us
In all applications, you have to define settings, usually based on environment variables, so that you can adapt the behavior of your application depending on their values. Having used Django to create your SaaS application, the settings you need can be found in a file called `settings.py`. In the following steps, we will modify `settings.py` to connect our private Object Storage bucket to our application. As noted in the requirements for this tutorial, you should have already [created a private Object Storage bucket](/storage/object/how-to/create-a-bucket/) before continuing.
-1. Take a look at your Django application's `settings.py` file. Natively, Django does not manage the S3 protocol for storing static files, and it will provide you with a basic configuration at the end of this file:
+1. Take a look at your Django application's `settings.py` file. Natively, Django does not manage the Amazon S3 protocol for storing static files, and only provides a basic configuration at the end of this file:
```
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
@@ -91,13 +91,13 @@ In all applications, you have to define settings, usually based on environment v
- `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are the [access key and secret key for your Scaleway account](/identity-and-access-management/iam/how-to/create-api-keys/)
- `AWS_STORAGE_BUCKET_NAME` is the name you gave your [Object Storage bucket](/storage/object/how-to/create-a-bucket/), e.g. `my_awesome_bucket`
- `AWS_S3_REGION_NAME` is the region/zone of your Object Storage Bucket
- - `AWS_S3_HOST` and `AWS_S3_ENDPOINT_URL` are the URLs needed to access your S3 bucket. They are composed of the previously defined variables.
- - `AWS_LOCATION` is the folder that will be created in our S3 bucket for our static files
+ - `AWS_S3_HOST` and `AWS_S3_ENDPOINT_URL` are the URLs needed to access your Object Storage bucket. They are composed of the previously defined variables.
+ - `AWS_LOCATION` is the folder that will be created in our Object Storage bucket for our static files
- `STATIC_URL` has changed
- - `STATICFILES_STORAGE` defines the new storage class that we want to use, here standard S3 protocol storage. We now need to give values to our environment values, so that they can be correctly found by `settings.py` via `os.getenv('MY_VAR_NAME')`.
+ - `STATICFILES_STORAGE` defines the new storage class that we want to use, here standard Amazon S3 protocol storage. We now need to give values to our environment values, so that they can be correctly found by `settings.py` via `os.getenv('MY_VAR_NAME')`.
- Remember that S3 is a standard protocol. Even though the `boto3` library asks us to prefix variables with `AWS`, it nonetheless works perfectly with Scaleway Object Storage.
+ Remember that Amazon S3 is a standard protocol. Even though the `boto3` library asks us to prefix variables with `AWS`, it nonetheless works perfectly with Scaleway Object Storage.
Even though we added a lot of lines to `settings.py`, only four environment variables are ultimately needed to use our Object Storage bucket: `ACCESS_KEY_ID`, `SECRET_ACCESS_KEY`, `AWS_S3_REGION_NAME` (eg `nl-ams`) and `AWS_STORAGE_BUCKET_NAME`. These variables are called using `os.getenv('MY_VAR_NAME')` so we now need to set these values.
diff --git a/tutorials/deploying-a-documentation-website-with-docusaurus-on-scaleway/index.mdx b/tutorials/deploying-a-documentation-website-with-docusaurus-on-scaleway/index.mdx
index 264a034efc..3adcec84f0 100644
--- a/tutorials/deploying-a-documentation-website-with-docusaurus-on-scaleway/index.mdx
+++ b/tutorials/deploying-a-documentation-website-with-docusaurus-on-scaleway/index.mdx
@@ -107,7 +107,7 @@ Docusaurus is available for most operating systems. In this tutorial, we describ
9. Click **Skip this and set up a workflow yourself**.
10. Copy the following code in the text editor, keep the default file name `main.yml` and click **Start commit**:
```
- name: Deploy Docusaurus to S3
+ name: Deploy Docusaurus to Object Storage
on:
push:
branches:
diff --git a/tutorials/deploying-qdrant-vectordb-kubernetes/index.mdx b/tutorials/deploying-qdrant-vectordb-kubernetes/index.mdx
index ed7a98b684..1495b57a7f 100644
--- a/tutorials/deploying-qdrant-vectordb-kubernetes/index.mdx
+++ b/tutorials/deploying-qdrant-vectordb-kubernetes/index.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This page explains how to deploy Qdrant Hybrid Cloud on Scaleway Kubernetes Kapsule.
tags: vectordb qdrant database
dates:
- validation: 2024-04-16
+ validation: 2024-10-21
posted: 2024-04-16
categories:
- kubernetes
@@ -21,10 +21,10 @@ Qdrant Hybrid Cloud on Scaleway offers a secure and scalable solution that meets
Key benefits of running Qdrant Hybrid Cloud on Scaleway include:
-- **AI-Focused resources:** Scaleway provides dedicated resources and infrastructure tailored for AI and machine learning workloads, complementing Qdrant Hybrid Cloud to empower advanced AI applications.
-- **Scalable vector search:** Qdrant Hybrid Cloud's fully managed vector database facilitates seamless scaling, whether vertically or horizontally. Deployed on Scaleway, it ensures robust scalability for projects of any scale, from startups to enterprises.
-- **European roots and focus:** Scaleway's presence in Europe aligns well with Qdrant's European roots, offering local expertise and infrastructure that adhere to European regulatory standards.
-- **Sustainability commitment:** Scaleway focuses on sustainability with eco-conscious data centers and an extended hardware lifecycle, reducing the environmental impact.
+- AI-Focused resources: Scaleway provides dedicated resources and infrastructure tailored for AI and machine learning workloads, complementing Qdrant Hybrid Cloud to empower advanced AI applications.
+- Scalable vector search: Qdrant Hybrid Cloud's fully managed vector database facilitates seamless scaling, whether vertically or horizontally. Deployed on Scaleway, it ensures robust scalability for projects of any scale, from startups to enterprises.
+- European roots and focus: Scaleway's presence in Europe aligns well with Qdrant's European roots, offering local expertise and infrastructure that adhere to European regulatory standards.
+- Sustainability commitment: Scaleway focuses on sustainability with eco-conscious data centers and an extended hardware lifecycle, reducing the environmental impact.
@@ -36,8 +36,8 @@ Key benefits of running Qdrant Hybrid Cloud on Scaleway include:
Setting up Qdrant Hybrid Cloud on Scaleway is straightforward, thanks to its Kubernetes-native architecture.
-1. **Activate Hybrid Cloud:** Log into your Qdrant account and activate **Hybrid Cloud**.
-2. **Integrate your clusters:** Add your Scaleway Kubernetes clusters as a private region in the Hybrid Cloud settings.
-3. **Simplified Management:** Use the Qdrant Management Console for seamless creation and oversight of Qdrant clusters on Scaleway.
+1. Log into your Qdrant account and activate **Hybrid Cloud**.
+2. Add your Scaleway Kubernetes clusters as a private region in the Hybrid Cloud settings.
+3. Use the Qdrant Management Console for seamless creation and oversight of Qdrant clusters on Scaleway.
For detailed deployment instructions on how to build a RAG system that combines blog content ingestion with the capabilities of semantic search, refer to the [official Qdrant on Scaleway documentation](https://qdrant.tech/documentation/examples/rag-chatbot-scaleway/) or the [Qdrant product documentation](https://qdrant.tech/documentation/).
\ No newline at end of file
diff --git a/tutorials/encode-videos-using-serverless-jobs/index.mdx b/tutorials/encode-videos-using-serverless-jobs/index.mdx
index 3f68a4d1b4..d2cb14f1be 100644
--- a/tutorials/encode-videos-using-serverless-jobs/index.mdx
+++ b/tutorials/encode-videos-using-serverless-jobs/index.mdx
@@ -15,27 +15,27 @@ dates:
posted: 2024-05-15
---
-This tutorial demonstrates the process of encoding videos retrieved from Object Storage using Serverless Jobs: media encoding is a resource-intensive task over prolonged durations, making it suitable for Serverless Jobs. The job takes a video file as its input, encodes it using a Docker image based on [FFMPEG](https://ffmpeg.org/), then uploads the encoded video back to the S3 bucket.
+This tutorial demonstrates how to encode videos retrieved from Object Storage using Serverless Jobs: media encoding is a resource-intensive, long-running task, which makes it a good fit for Serverless Jobs. The job takes a video file as its input, encodes it using a Docker image based on [FFMPEG](https://ffmpeg.org/), then uploads the encoded video back to the Object Storage bucket.
- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
-- An [Object Storage bucket](/storage/object/how-to/create-a-bucket/)
+- An [Object Storage bucket](/storage/object/how-to/create-a-bucket/)
- A valid [API key](/identity-and-access-management/iam/how-to/create-api-keys/)
- Installed [Docker engine](https://docs.docker.com/engine/install/)
## Creating the job image
-The initial step involves defining a Docker image for interacting with the S3 Object Storage using [MinIO](https://min.io/) and performing a video encoding task using [FFMPEG](https://ffmpeg.org/).
+The initial step involves defining a Docker image that interacts with Object Storage using the [MinIO](https://min.io/) client and performs a video encoding task using [FFMPEG](https://ffmpeg.org/).
1. Create a bash script `encode.sh` with the following content:
```bash
#!/bin/sh
set -e
- echo "Configuring S3 access for MinIO"
+ echo "Configuring Object Storage access for MinIO"
mc config host add scw "https://$JOB_S3_ENDPOINT/" "$JOB_S3_ACCESS_KEY" "$JOB_S3_SECRET_KEY"
echo "Downloading the file from S3"
@@ -48,7 +48,7 @@ The initial step involves defining a Docker image for interacting with the S3 Ob
mc cp "/tmp/$JOB_OUTPUT_FILENAME" "scw/$JOB_OUTPUT_PATH/$JOB_OUTPUT_FILENAME"
```
- That bash script downloads a video from an S3 bucket, encodes that video using FFMPEG, and then uploads the encoded video into the bucket, by leveraging a couple of environment variables which will be detailed in the following sections.
+ This bash script downloads a video from an Object Storage bucket, encodes it using FFMPEG, and then uploads the encoded video back to the bucket, using a few environment variables that are detailed in the following sections.
For illustration purposes, this script encodes a video using the x264 video codec and the AAC audio codec. Encoding settings can be modified using command-line parameters to FFMPEG.
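+   For example, a minimal form of such a command, shown for illustration only (the script may use additional flags), is:
+
+   ```bash
+   # Encode with the x264 video codec and the AAC audio codec
+   ffmpeg -i "/tmp/$JOB_INPUT_FILENAME" -c:v libx264 -c:a aac "/tmp/$JOB_OUTPUT_FILENAME"
+   ```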
@@ -58,7 +58,7 @@ The initial step involves defining a Docker image for interacting with the S3 Ob
```dockerfile
FROM linuxserver/ffmpeg:amd64-latest
- # Install the MinIO S3 client
+ # Install the MinIO client
RUN curl https://dl.min.io/client/mc/release/linux-amd64/mc -o /usr/local/bin/mc
RUN chmod +x /usr/local/bin/mc
@@ -69,10 +69,10 @@ The initial step involves defining a Docker image for interacting with the S3 Ob
ENTRYPOINT /encode.sh
```
- This Dockerfile uses `linuxserver/ffmpeg` as a base image bundled with FFMPEG along with a variety of encoding codecs and installs [MinIO](https://min.io/) as a command-line S3 client to copy files over Object Storage.
+ This Dockerfile uses `linuxserver/ffmpeg` as a base image bundled with FFMPEG along with a variety of encoding codecs, and installs [MinIO](https://min.io/) as a command-line client to copy files to and from Object Storage.
3. Build and [push the image](/containers/container-registry/how-to/push-images/) to your Container Registry:
- ```
+ ```bash
docker build . -t
docker push
```
@@ -94,7 +94,7 @@ The initial step involves defining a Docker image for interacting with the S3 Ob
4. Toggle the **Advanced options** section and add 3 environment variables:
- - `JOB_S3_ENDPOINT` is your S3 endpoint (e.g. `s3.nl-ams.scw.cloud`).
+ - `JOB_S3_ENDPOINT` is your Object Storage endpoint (e.g. `s3.nl-ams.scw.cloud`).
- `JOB_S3_ACCESS_KEY` is your API access key.
- `JOB_S3_SECRET_KEY` is your API secret key.
@@ -104,14 +104,14 @@ The initial step involves defining a Docker image for interacting with the S3 Ob
## Triggering the serverless job
-Ensure that your S3 bucket contains at least one video that can be encoded.
+Ensure that your Object Storage bucket contains at least one video that can be encoded.
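+
+You can check with any Object Storage client. For example, using the MinIO client installed locally, with an alias `scw` configured as in `encode.sh` (`my-videos` is a placeholder bucket name):
+
+```bash
+# List the objects in the bucket through the "scw" alias
+mc ls scw/my-videos
+```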
1. In the Scaleway Console, go to **Serverless Jobs** and click on the name of your job. The job **Overview** tab displays.
2. Click the **Actions** button, then click **Run job with options** in the drop-down menu.
3. Add 4 environment variables:
- - `JOB_INPUT_PATH` is the folder containing the video to encode, including your S3 bucket name.
+ - `JOB_INPUT_PATH` is the folder containing the video to encode, including your Object Storage bucket name.
- `JOB_INPUT_FILENAME` is the file name of the video to encode, including the file extension.
- - `JOB_OUTPUT_PATH` is the folder containing the encoded video that will be uploaded, including your S3 bucket name.
+ - `JOB_OUTPUT_PATH` is the folder containing the encoded video that will be uploaded, including your Object Storage bucket name.
- `JOB_OUTPUT_FILENAME` is the file name of the encoded video that will be uploaded.
@@ -120,12 +120,12 @@ Ensure that your S3 bucket contains at least one video that can be encoded.
The progress and details for your Job run can be viewed in the **Job runs** section of the job **Overview** tab in the [Scaleway console](https://console.scaleway.com). You can also access the detailed logs of your job in [Cockpit](/observability/cockpit/quickstart/).
-Once the run status is **Succeeded**, the encoded video can be found in your S3 bucket under the folder and file name specified above in the environment variables.
+Once the run status is **Succeeded**, the encoded video can be found in your Object Storage bucket under the folder and file name specified above in the environment variables.
Your job can also be triggered through the [Scaleway API](https://www.scaleway.com/en/developers/api/serverless-jobs/#path-job-definitions-run-an-existing-job-definition-by-its-unique-identifier-this-will-create-a-new-job-run) using the same environment variables:
-```
+```bash
curl -X POST \
-H "X-Auth-Token: " \
-H "Content-Type: application/json" \
diff --git a/tutorials/encrypt-s3-data-rclone/index.mdx b/tutorials/encrypt-s3-data-rclone/index.mdx
index 21cca69e15..5b79243e56 100644
--- a/tutorials/encrypt-s3-data-rclone/index.mdx
+++ b/tutorials/encrypt-s3-data-rclone/index.mdx
@@ -7,7 +7,7 @@ content:
paragraph: In this tutorial, you will learn how to encrypt your data using Rclone before uploading it to Scaleway Object Storage.
categories:
- object-storage
-tags: encryption s3 rclone
+tags: encryption amazon-s3 rclone
dates:
validation: 2024-09-16
posted: 2020-06-10
@@ -19,7 +19,7 @@ Offering virtual backends, Rclone facilitates encryption, caching, chunking, and
Compatible with Windows, macOS X, and various Linux distributions, Rclone addresses a wide user base seeking efficient file management solutions.
-In this tutorial, we will explore the capabilities of the **Rclone crypt** module, which empowers users to encrypt their data seamlessly before transmitting it to Scaleway Object Storage via the S3 protocol.
+In this tutorial, we will explore the capabilities of the **Rclone crypt** module, which empowers users to encrypt their data seamlessly before transmitting it to Scaleway Object Storage via the Amazon S3 protocol.
@@ -65,13 +65,13 @@ brew install rclone
sudo mandb
```
-## Configuring an S3 remote endpoint
+## Configuring an Object Storage remote endpoint
You need to have your [API key](/identity-and-access-management/iam/how-to/create-api-keys/) ready for the `rclone` configuration.
-Before encrypting your data, create a new remote S3 endpoint in Rclone using the `rclone config` command:
+Before encrypting your data, create a new remote Object Storage endpoint in Rclone using the `rclone config` command:
```
No remotes found - make a new one
@@ -187,7 +187,7 @@ e/n/d/r/c/s/q> q
`rclone crypt` will use the previously configured endpoint to store the encrypted files. Configure it by running `rclone config` again.
-In the config below we define the Object Storage bucket at the `remote` prompt. In our example, we use our S3 endpoint `scaleway` with the bucket `myobjectstoragebucket`.
+In the config below we define the Object Storage bucket at the `remote` prompt. In our example, we use our Object Storage endpoint `scaleway` with the bucket `myobjectstoragebucket`.
Edit these values towards your configuration. A long passphrase is recommended for security reasons, or you can use a random one.
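+Once the crypt remote is configured (assumed here to be named `secret`), uploads are encrypted transparently:
+
+```bash
+# Copy a local directory through the crypt remote; rclone encrypts before upload
+rclone copy /path/to/local/data secret:
+```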
diff --git a/tutorials/getting-started-with-kops-on-scaleway/index.mdx b/tutorials/getting-started-with-kops-on-scaleway/index.mdx
index 3554068e73..640c656eeb 100644
--- a/tutorials/getting-started-with-kops-on-scaleway/index.mdx
+++ b/tutorials/getting-started-with-kops-on-scaleway/index.mdx
@@ -41,11 +41,11 @@ export SCW_SECRET_KEY="my-secret-key"
export SCW_DEFAULT_PROJECT_ID="my-project-id"
# Configure the bucket name to store kops state
export KOPS_STATE_STORE=scw:// # where is the name of the bucket you set earlier
-# Scaleway Object Storage is S3 compatible so we just override some S3 configurations to talk to our bucket
+# Scaleway Object Storage is Amazon S3-compatible, so we just override some configurations to talk to our bucket
export S3_REGION=fr-par # or another scaleway region providing Object Storage
export S3_ENDPOINT=s3.$S3_REGION.scw.cloud # define provider endpoint
-export S3_ACCESS_KEY_ID="my-access-key" # where is the S3 API access key for your bucket
-export S3_SECRET_ACCESS_KEY="my-secret-key" # where is the S3 API secret key for your bucket
+export S3_ACCESS_KEY_ID="my-access-key" # where is the API access key for your bucket
+export S3_SECRET_ACCESS_KEY="my-secret-key" # where is the API secret key for your bucket
# this is required since Scaleway support is currently in alpha so it is feature gated
export KOPS_FEATURE_FLAGS="Scaleway"
```
diff --git a/tutorials/how-to-implement-rag-generativeapis/index.mdx b/tutorials/how-to-implement-rag-generativeapis/index.mdx
index 570689a1fc..053bee87d2 100644
--- a/tutorials/how-to-implement-rag-generativeapis/index.mdx
+++ b/tutorials/how-to-implement-rag-generativeapis/index.mdx
@@ -59,11 +59,11 @@ Create a .env file and add the following variables. These will store your API ke
SCW_DB_HOST=your_scaleway_managed_db_host # The IP address of your database instance
SCW_DB_PORT=your_scaleway_managed_db_port # The port number for your database instance
- # Scaleway S3 bucket configuration
+ # Scaleway Object Storage bucket configuration
## Will be used to store your proprietary data (PDF, CSV etc)
SCW_BUCKET_NAME=your_scaleway_bucket_name
SCW_REGION=fr-par
- SCW_BUCKET_ENDPOINT="https://s3.{{SCW_REGION}}.scw.cloud" # S3 main endpoint, e.g., https://s3.fr-par.scw.cloud
+ SCW_BUCKET_ENDPOINT="https://s3.{{SCW_REGION}}.scw.cloud" # Object Storage main endpoint, e.g., https://s3.fr-par.scw.cloud
# Scaleway Generative APIs endpoint
## LLM and Embedding model are served through this base URL
@@ -196,7 +196,7 @@ page_iterator = paginator.paginate(Bucket=os.getenv("SCW_BUCKET_NAME", ""))
In this code sample, we:
- Set up a Boto3 session: we initialize a Boto3 session, which is the AWS SDK for Python, fully compatible with Scaleway Object Storage. This session manages configuration, including credentials and settings, that Boto3 uses for API requests.
-- Create an S3 client: we establish an S3 client to interact with the Scaleway Object Storage service.
+- Create an Amazon S3 client: we establish an Amazon S3 client to interact with the Scaleway Object Storage service.
- Set up pagination for listing objects: we prepare pagination to handle potentially large lists of objects efficiently.
- Iterate through the bucket: this initiates the pagination process, allowing us to list all objects within the specified Scaleway Object bucket seamlessly.
diff --git a/tutorials/how-to-implement-rag/index.mdx b/tutorials/how-to-implement-rag/index.mdx
index d6197c4d74..2512c9b477 100644
--- a/tutorials/how-to-implement-rag/index.mdx
+++ b/tutorials/how-to-implement-rag/index.mdx
@@ -59,9 +59,9 @@ Create a .env file and add the following variables. These will store your API ke
SCW_DB_HOST=your_scaleway_managed_db_host # The IP address of your database instance
SCW_DB_PORT=your_scaleway_managed_db_port # The port number for your database instance
- # Scaleway S3 bucket configuration
+ # Scaleway Object Storage bucket configuration
SCW_BUCKET_NAME=your_scaleway_bucket_name
- SCW_BUCKET_ENDPOINT="https://s3.{{SCW_REGION}}.scw.cloud" # S3 endpoint, e.g., https://s3.fr-par.scw.cloud
+ SCW_BUCKET_ENDPOINT="https://s3.{{SCW_REGION}}.scw.cloud" # Object Storage endpoint, e.g., https://s3.fr-par.scw.cloud
# Scaleway Inference API configuration (Embeddings)
SCW_INFERENCE_EMBEDDINGS_ENDPOINT="https://{{SCW_INFERENCE_EMBEDDINGS_DEPLOYMENT_ID}}.ifr.fr-par.scaleway.com/v1" # Endpoint for sentence-transformers/sentence-t5-xxl deployment
@@ -207,7 +207,7 @@ page_iterator = paginator.paginate(Bucket=BUCKET_NAME)
In this code sample we:
- Set up a Boto3 session: We initialize a Boto3 session, which is the AWS SDK for Python, fully compatible with Scaleway Object Storage. This session manages configuration, including credentials and settings, that Boto3 uses for API requests.
-- Create an S3 client: We establish an S3 client to interact with the Scaleway Object Storage service.
+- Create an Amazon S3 client: We establish an Amazon S3 client to interact with the Scaleway Object Storage service.
- Set up pagination for listing objects: We prepare pagination to handle potentially large lists of objects efficiently.
- Iterate through the bucket: This initiates the pagination process, allowing us to list all objects within the specified Scaleway Object bucket seamlessly.
diff --git a/tutorials/install-github-actions-runner-mac/index.mdx b/tutorials/install-github-actions-runner-mac/index.mdx
index 2805a32abe..2dce0c7c1d 100644
--- a/tutorials/install-github-actions-runner-mac/index.mdx
+++ b/tutorials/install-github-actions-runner-mac/index.mdx
@@ -9,17 +9,10 @@ tags: mac m1 github-actions ci/cd apple-silicon self-hosted-runner
categories:
- apple-silicon
dates:
- validation: 2024-07-17
+ validation: 2024-10-24
posted: 2024-01-31
---
-
-
GitHub Actions is a powerful CI/CD platform that allows users to automate their software development workflows, connected to a GitHub organization or repository. While GitHub offers online runners with a pay-as-you-go model, self-hosted runners provide increased control and customization for your CI/CD setup. This tutorial guides you through setting up, configuring, and connecting a self-hosted runner on a Mac mini to execute macOS pipelines.
diff --git a/tutorials/k8s-fluentbit-observability/assets/grafana-node-exporter-dashboard.webp b/tutorials/k8s-fluentbit-observability/assets/grafana-node-exporter-dashboard.webp
deleted file mode 100644
index 7d90e4c0e4..0000000000
Binary files a/tutorials/k8s-fluentbit-observability/assets/grafana-node-exporter-dashboard.webp and /dev/null differ
diff --git a/tutorials/k8s-fluentbit-observability/assets/scaleway-cockpit-token-permissions.webp b/tutorials/k8s-fluentbit-observability/assets/scaleway-cockpit-token-permissions.webp
deleted file mode 100644
index 1ea2e4b0b9..0000000000
Binary files a/tutorials/k8s-fluentbit-observability/assets/scaleway-cockpit-token-permissions.webp and /dev/null differ
diff --git a/tutorials/k8s-fluentbit-observability/index.mdx b/tutorials/k8s-fluentbit-observability/index.mdx
deleted file mode 100644
index d3ef96e669..0000000000
--- a/tutorials/k8s-fluentbit-observability/index.mdx
+++ /dev/null
@@ -1,240 +0,0 @@
----
-meta:
- title: Send Kapsule logs and metrics to the Observability Cockpit with Fluent Bit
- description: Learn to configure Fluent Bit on a Kapsule cluster to forward logs and metrics to the Observability Cockpit for Grafana visualization.
-content:
- h1: Send Kapsule logs and metrics to the Observability Cockpit with Fluent Bit
- paragraph: Learn to configure Fluent Bit on a Kapsule cluster to forward logs and metrics to the Observability Cockpit for Grafana visualization.
-tags: fluentbit grafana kubernetes metrics logs
-categories:
- - cockpit
- - kubernetes
-dates:
- validation: 2023-06-17
- posted: 2023-06-01
----
-
-In this tutorial you will learn how to forward the applicative logs and the usage metrics of your [Kubernetes Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/) containers into the [Observability Cockpit](/observability/cockpit/quickstart/).
-
-This process will be done using Fluent Bit, a lightweight logs and metrics processor that acts as a gateway between containers and the Cockpit endpoints, when configured in a Kubernetes cluster.
-
-
-
-
-
-- A Scaleway account logged into the [console](https://console.scaleway.com)
-- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
-- [Retrieved your Grafana credentials](/observability/cockpit/how-to/retrieve-grafana-credentials/)
-- [Created a Kapsule cluster](/containers/kubernetes/how-to/create-cluster/)
-- Set up [kubectl](/containers/kubernetes/how-to/connect-cluster-kubectl/) on your machine
-- Installed `helm`, the Kubernetes [package manager](https://helm.sh/), on your local machine (version 3.2+)
-
-
- - Having the default configuration on your agents might lead to more of your resources' metrics being sent, a high consumption, and a high bill at the end of the month.
- - Sending metrics and logs for Scaleway resources or personal data using an external path is a billable feature. In addition, any data that you push yourself is billed, even if you send data from Scaleway products. Refer to the [product pricing](https://www.scaleway.com/en/pricing/?tags=available,managedservices-observability-cockpit) for more information.
-
-
-## Configuring the Fluent Bit service
-
-Fluent Bit will be installed as a Helm package configured to target your Kubernetes resources as inputs and your Observability cockpit as an output.
-
-1. Add the Helm repository for Fluent Bit to your machine:
-
- ```bash
- helm repo add fluent https://fluent.github.io/helm-charts
- helm repo update
- ```
-
-2. Create a [values file for Helm](https://helm.sh/docs/chart_template_guide/values_files/) named `values.yaml` that we will use to configure Fluent Bit.
-3. Create a first section `config.service` in the `values.yaml` file to configure the Fluent Bit master process:
-
- ```yaml
- config:
- service: |
- [SERVICE]
- Flush 1
- Log_level info
- Daemon off
- Parsers_File custom_parsers.conf
- HTTP_Server on
- HTTP_Listen 0.0.0.0
- HTTP_PORT 2020
- ```
-
-- `Flush 1`: Collects logs every second.
-- `Log_level info`: Displays informational logs in the Fluent Bit pods.
-- `Daemon off`: Run Fluent Bit as the foreground process in its pods.
-- `Parsers_File custom_parsers.conf`: Loads additional log parsers that we will define later on.
-- `HTTP_Server on`: Enables Fluent Bit's built-in HTTP server.
-- `HTTP_Listen 0.0.0.0`: Listen on all interfaces exposed by your pod.
-- `HTTP_PORT 2020`: Listen to port 2020.
-
-
- You need to enable Fluent Bit's HTTP server for it to communicate with your Cockpit.
-
-
-## Configuring observability inputs
-
-We will configure Fluent Bit to retrieve the metrics (e.g.: CPU, memory, disk usage) from your Kubernetes nodes and the applicative logs from your running pods.
-
-Create a new section `config.inputs` in the `values.yaml` file:
-
-```yaml
- inputs: |
- [INPUT]
- Name node_exporter_metrics
- Tag node_metrics
- Scrape_interval 60
- [INPUT]
- Name tail
- Path /var/log/containers/*.log
- Parser docker
- Tag logs.*
-```
-
-The first subsection adds an input to Fluent Bit to retrieve the usage metrics from your containers:
-- `Name node_exporter_metrics`: This input plugin is used to collect various system-level metrics from your nodes.
-- `Tag node_metrics`: The `Tag` parameter assigns a tag to the incoming data from the `node_exporter_metrics` plugin. In this case, the tag `node_metrics` is assigned to the collected metrics.
-- `Scrape_interval 60`: The frequency at which metrics are retrieved. Metrics are collected every 60 seconds.
-
-
- Increasing the scrape interval allows you to push fewer metrics samples per minute to your Cockpit and thus, pay less.
- For instance, if your application exposes 100 metrics every 60 seconds, these 100 metrics are collected and pushed to the server. If you configure your scrape interval to 1 second, you will push 6000 samples per minute.
-
-
-The second subsection adds an input to Fluent Bit to retrieve the logs from your containers:
-- `Name tail`: The tail input plugin is used to read logs from files.
-- `Path /var/log/containers/*.log`: The tail plugin reads logs from `/var/log/containers/*.log` which are the log dumps from your containers.
-- `Parser docker`: The `Parser` parameter specifies the parser to be used for parsing log records. The `docker` parser is a custom parser that will be defined below.
-- `Tag logs.*`: The `Tag` parameter assigns a tag to the incoming data from the tail plugin. The tag "logs.*" indicates that the collected logs will have a tag prefix of "logs" followed by any additional subtag.
-
-## Configuring logs processing
-
-The inputs collected by Fluent Bit should be structured before sending them to the Cockpit to enable further filtering and better visualization.
-
-1. Create a `config.customParsers` section to define the `docker` parser which is referenced by the log parsing input:
-
- ```yaml
- customParsers: |
- [PARSER]
- Name docker
- Format json
- Time_Key time
- Time_Format %Y-%m-%dT%H:%M:%S.%L
- ```
-
- This parser expects log records in JSON format. It assumes that the timestamp information is located under the key "time" in the JSON log record, and that the timestamp format is in ISO 8601 date format.
-
-2. Define a section named `config.filters` to filter incoming log files from the containers:
-
- ```yaml
- filters: |
- [FILTER]
- Name kubernetes
- Match logs.*
- Merge_Log on
- Keep_Log off
- K8S-Logging.Parser on
- K8S-Logging.Exclude on
- ```
-
- This sets up a filter plugin which will be applied to log records with tags starting with `logs.`. It enables log merging, extracts and parses Kubernetes log metadata, and allows log exclusion based on Kubernetes log metadata filters.
-
-3. Define a section named `config.extraFiles.'labelmap.json'`:
-
- ```yaml
- extraFiles:
- labelmap.json: |
- {
- "kubernetes": {
- "container_name": "container",
- "host": "node",
- "labels": {
- "app": "app",
- "release": "release"
- },
- "namespace_name": "namespace",
- "pod_name": "instance"
- },
- "stream": "stream"
- }
- ```
-
- This defines a map for various Kubernetes labels and metadata to specific Fluent Bit field names to parse and structure the logs.
-
-## Configuring observability outputs
-
-The last step in the Fluent Bit configuration is to define where the logs and metrics will be pushed.
-
-1. [Create a token](/observability/cockpit/how-to/create-token/) and select push permissions for both logs and metrics.
-
-
-
-2. Create a section named `config.outputs` in the `values.yaml` file:
-
- ```yaml
- outputs: |
- [OUTPUT]
- Name prometheus_remote_write
- Match node_metrics
- Host <...>
- Port 443
- Uri /api/v1/push
- Header Authorization Bearer <...>
- Log_response_payload false
- Tls on
- Tls.verify on
- Add_label job kapsule-metrics
- [OUTPUT]
- Match logs.*
- Name loki
- Host <...>
- Port 443
- Tls on
- Tls.verify on
- Label_map_path /fluent-bit/etc/labelmap.json
- Auto_kubernetes_labels on
- Http_user nologin
- Http_passwd <...>
- ```
-
-3. Fill in the blanks as follows:
-- `Host` from the first subsection: paste your Metrics API URL defined in the **API and Tokens tab** section from the Cockpit. Remove the `https://` protocol.
-- `Header`: Next to `Bearer`, paste the token generated in the previous step.
-- `Host` from the second subsection: paste your Logs API URL defined in the **API and Tokens tab** section from the Cockpit. Remove the `https://` protocol.
-- `Http_passwd`: paste the token generated in the previous step.
-
-In the first subsection, the `prometheus_remote_write` plugin is used to send metrics to the [Prometheus](https://prometheus.io/) server of your Cockpit using the remote write protocol.
-In the second subsection, the `loki` plugin is used to send logs to the [Loki](https://grafana.com/oss/loki/) server of your Cockpit, using the field mapping from `labelmap.json` defined above.
-
-## Installing Fluent Bit
-
-Run the following command in the same directory as your `values.yaml` file to install Fluent Bit:
-
-```
-helm upgrade --install fluent-bit fluent/fluent-bit -f ./values.yaml
-```
-
-You should see a `DeamonSet` named `fluent-bit` with running pods on all of your nodes.
-
-## Visualizing Kapsule logs and metrics
-
-You can find the logs and metrics from your Kubernetes cluster in your Cockpit's [dashboard in Grafana](/observability/cockpit/how-to/access-grafana-and-managed-dashboards/).
-
-### Exploring metrics
-
-Grafana has a built-in dashboard for visualizing node metrics.
-
-1. Go to **Dashboards** in your Grafana instance.
-2. Click **New**, **Folder** and name it `Kapsule`.
-3. Click **New**, **Import** and paste the following URL in the **Import via grafana.com** field:
- ```
- https://grafana.com/grafana/dashboards/1860-node-exporter-full/
- ```
-4. Click **Load** to access the new dashboard named **Node Exporter Server Metrics**.
-
-
-
-### Exploring logs
-
-Your Kapsule logs index can be queried in the **Explore** section of your Cockpit's dashboard in Grafana. In the data source selector, pick the **Logs** index. The Kubernetes labels are already mapped and can be used as filters in queries.
\ No newline at end of file
diff --git a/tutorials/k8s-kapsule-multi-az/index.mdx b/tutorials/k8s-kapsule-multi-az/index.mdx
index af7cfe2dab..7b61a9df08 100644
--- a/tutorials/k8s-kapsule-multi-az/index.mdx
+++ b/tutorials/k8s-kapsule-multi-az/index.mdx
@@ -11,7 +11,7 @@ categories:
- kubernetes
- domains-and-dns
dates:
- validation: 2024-04-15
+ validation: 2024-10-21
posted: 2023-04-15
---
@@ -97,7 +97,7 @@ Start by creating a multi-AZ cluster on `fr-par` region, in a dedicated VPC and
tags = ["multi-az"]
type = "kapsule"
- version = "1.28"
+ version = "1.30.2"
cni = "cilium"
delete_additional_resources = true
@@ -163,12 +163,12 @@ Start by creating a multi-AZ cluster on `fr-par` region, in a dedicated VPC and
kubectl get nodes
NAME STATUS ROLES AGE VERSION
- scw-kapsule-multi-az-pool-fr-par-1-61e22198f8c Ready 89s v1.28.0
- scw-kapsule-multi-az-pool-fr-par-1-8334e772ced Ready 82s v1.28.0
- scw-kapsule-multi-az-pool-fr-par-2-1bcf90f3683 Ready 90s v1.28.0
- scw-kapsule-multi-az-pool-fr-par-2-33265e85597 Ready 86s v1.28.0
- scw-kapsule-multi-az-pool-fr-par-3-44b14b7bbbd Ready 84s v1.28.0
- scw-kapsule-multi-az-pool-fr-par-3-863491657c7 Ready 80s v1.28.0
+ scw-kapsule-multi-az-pool-fr-par-1-61e22198f8c Ready 89s v1.30.2
+ scw-kapsule-multi-az-pool-fr-par-1-8334e772ced Ready 82s v1.30.2
+ scw-kapsule-multi-az-pool-fr-par-2-1bcf90f3683 Ready 90s v1.30.2
+ scw-kapsule-multi-az-pool-fr-par-2-33265e85597 Ready 86s v1.30.2
+ scw-kapsule-multi-az-pool-fr-par-3-44b14b7bbbd Ready 84s v1.30.2
+ scw-kapsule-multi-az-pool-fr-par-3-863491657c7 Ready 80s v1.30.2
```
## Nginx ingress controller as a stateless multi-AZ application
diff --git a/tutorials/k8s-velero-backup/index.mdx b/tutorials/k8s-velero-backup/index.mdx
index f88c4673ea..846850de0a 100644
--- a/tutorials/k8s-velero-backup/index.mdx
+++ b/tutorials/k8s-velero-backup/index.mdx
@@ -14,7 +14,7 @@ dates:
posted: 2023-06-02
---
-Velero is an open-source utility designed to facilitate the backup, restoration, and migration of Kubernetes cluster resources and persistent volumes on S3-compatible Object Storage. Originally developed by Heptio, it became part of VMware following an acquisition. Velero offers a straightforward and effective approach to protecting your Kubernetes applications and data through regular backups and supporting disaster recovery measures.
+Velero is an open-source utility designed to facilitate the backup, restoration, and migration of Kubernetes cluster resources and persistent volumes on Amazon S3-compatible Object Storage. Originally developed by Heptio, it became part of VMware following an acquisition. Velero offers a straightforward and effective approach to protecting your Kubernetes applications and data through regular backups and supporting disaster recovery measures.
With Velero, users can generate either scheduled or on-demand backups encompassing the entire cluster or specific namespaces. These backups comprehensively capture the state of all resources within the cluster, including deployments, services, config maps, secrets, and persistent volumes. Velero ensures the preservation of associated metadata and labels, guaranteeing the completeness and accuracy of the backups for potential restoration.
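+As a preview of the setup, and assuming the Velero CLI is installed with a bucket and credentials file already prepared (all names below are placeholders), pointing Velero at a Scaleway bucket typically looks like this:
+
+```bash
+# Install Velero in the cluster, using the AWS-compatible object storage plugin
+velero install \
+  --provider aws \
+  --plugins velero/velero-plugin-for-aws \
+  --bucket my-velero-backups \
+  --secret-file ./credentials-velero \
+  --backup-location-config region=fr-par,s3ForcePathStyle="true",s3Url=https://s3.fr-par.scw.cloud
+```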
diff --git a/tutorials/large-messages/index.mdx b/tutorials/large-messages/index.mdx
index 4f851c6a4f..6e7d17ffac 100644
--- a/tutorials/large-messages/index.mdx
+++ b/tutorials/large-messages/index.mdx
@@ -4,7 +4,7 @@ meta:
description: Learn how to build a serverless architecture for handling large messages with Scaleway's NATS, Serverless Functions, and Object Storage. Follow our step-by-step Terraform-based tutorial for asynchronous file conversion using messaging, functions, and triggers.
content:
h1: Create a serverless architecture for handling large messages using Scaleway's NATS, Serverless Functions, and Object Storage.
- paragraph: Learn how to build a serverless architecture for handling large messages with Scaleway's NATS, Serverless Functions, and Object Storage. Follow our step-by-step Terraform-based tutorial for asynchronous file conversion using messaging, functions, and triggers.
+ paragraph: Learn how to build a serverless architecture for handling large messages with Scaleway's NATS, Serverless Functions, and Object Storage. Follow our step-by-step Terraform-based tutorial for asynchronous file conversion using messaging, functions, and triggers.
categories:
- messaging
- functions
@@ -52,7 +52,7 @@ Three essential services are required to ensure everything is working together:
Remember that you can refer to the [code repository](https://github.com/rouche-q/serverless-examples/tree/main/projects/large-messages/README.md) to check all code files.
- ```terraform
+ ```hcl
terraform {
required_providers {
scaleway = {
@@ -74,7 +74,7 @@ Three essential services are required to ensure everything is working together:
The Scaleway provider is needed, but also three providers from HashiCorp that we will use later in the tutorial.
2. Include two variables to enable the secure passage of your Scaleway credentials. Then initialize the Scaleway provider in the `fr-par-1` region.
- ```terraform
+ ```hcl
variable "scw_access_key_id" {
type = string
sensitive = true
@@ -97,7 +97,7 @@ Three essential services are required to ensure everything is working together:
```
4. Continuing in the `main.tf` file, add the following Terraform code to create an Object Storage bucket that will be used for storing your images.
- ```terraform
+ ```hcl
resource "random_id" "bucket" {
byte_length = 8
}
@@ -119,7 +119,7 @@ Three essential services are required to ensure everything is working together:
In this code, the resource `random_id.bucket` generates a random ID, which is then passed to the object bucket to ensure its uniqueness. Additionally, a `scaleway_object_bucket_acl` ACL is applied to the bucket, setting it to private and outputting the bucket name for use in your producer.
5. Add these resources to create a NATS account and your NATS credentials file:
- ```terraform
+ ```hcl
resource "scaleway_mnq_nats_account" "large_messages" {
name = "nats-acc-large-messages"
}
@@ -162,7 +162,7 @@ As mentioned earlier, the producer will be implemented as a straightforward shel
Our script takes the file path that we want to upload as the first parameter.
To upload the file, we will use the AWS CLI configured with the Scaleway endpoint and credentials because Scaleway Object storage is fully compliant with S3.
-
+
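+   A minimal sketch of that configuration, assuming your Scaleway credentials are exported as `SCW_ACCESS_KEY` and `SCW_SECRET_KEY` (placeholder names):
+
+   ```bash
+   # Hypothetical AWS CLI setup for Scaleway Object Storage
+   export AWS_ACCESS_KEY_ID="$SCW_ACCESS_KEY"
+   export AWS_SECRET_ACCESS_KEY="$SCW_SECRET_KEY"
+   # Recent AWS CLI versions honor this variable; older ones need --endpoint-url on each command
+   export AWS_ENDPOINT_URL="https://s3.fr-par.scw.cloud"
+   ```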
3. Pass the path to the AWS CLI command as follows:
```bash
aws s3 cp $1 s3://$SCW_BUCKET
@@ -211,7 +211,7 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl
```
5. Before proceeding with the function's logic, improve the Terraform code by adding the following code to your `main.tf` file:
- ```terraform
+ ```hcl
resource "null_resource" "install_dependencies" {
provisioner "local-exec" {
command = <<-EOT
@@ -240,7 +240,7 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl
The `null_resource` is used to download and package the correct versions of the libraries that we use with the function. Learn more about this in the [Scaleway documentation.](/serverless/functions/how-to/package-function-dependencies-in-zip/#specific-libraries-(with-needs-for-specific-c-compiled-code))
6. Create the function namespace.
- ```terraform
+ ```hcl
resource "scaleway_function_namespace" "large_messages" {
name = "large-messages-function"
description = "Large messages namespace"
@@ -248,7 +248,7 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl
```
7. Add the resource to set up the function.
- ```terraform
+ ```hcl
resource "scaleway_function" "large_messages" {
namespace_id = scaleway_function_namespace.large_messages.id
runtime = "python311"
@@ -275,15 +275,15 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl
Essential environment variables and secrets to use in our function logic are also added.
8. Create the function trigger to "wake up" the function when a NATS message comes in.
- ```terraform
+ ```hcl
resource "scaleway_function_trigger" "large_messages" {
function_id = scaleway_function.large_messages.id
name = "large-messages-trigger"
nats {
account_id = scaleway_mnq_nats_account.large_messages.id
subject = "large-messages"
- }
- }
+ }
+ }
```
It defines which account ID and subject to observe for getting messages.
@@ -296,7 +296,7 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl
secret_access_key = os.getenv("SECRET_ACCESS_KEY")
```
-10. Get the input file name from the body, define the PDF file name from this, and set up the s3 client to upload the file with Scaleway credentials.
+10. Get the input file name from the body, define the PDF file name from this, and set up the Amazon S3 client to upload the file with Scaleway credentials.
```python
input_file = event['body']
output_file = os.path.splitext(input_file)[0] + ".pdf"
@@ -318,7 +318,7 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl
print("Successfully made pdf file")
```
-12. Download the image from the bucket using the s3 client.
+12. Download the image from the bucket using the Amazon S3 client.
```python
s3.download_file(bucket_name, input_file, input_file)
print("Object " + input_file + " downloaded")
@@ -331,7 +331,7 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl
print("Object " + input_file + " uploaded")
```
-14. Put a `try/except` around the code to gracefully handle any errors coming from the S3 client.
+14. Put a `try/except` around the code to gracefully handle any errors coming from the Object Storage client.
```python
try:
s3.download_file(bucket_name, input_file, input_file)
@@ -364,6 +364,6 @@ terraform apply
## Conclusion, going further
In this introductory tutorial, we have demonstrated the usage of the NATS server for Messaging and Queuing, along with other services from the Scaleway ecosystem, to facilitate the transfer of large messages surpassing the typical size constraints. There are possibilities to expand upon this tutorial for various use cases, such as:
-
+
- Extending the conversion capabilities to handle different document types like `docx`.
- Sending URLs directly to NATS and converting HTML content to PDF.
\ No newline at end of file
diff --git a/tutorials/manage-instances-with-terraform-and-functions/index.mdx b/tutorials/manage-instances-with-terraform-and-functions/index.mdx
index cb04ce6113..2b68b0a2e4 100644
--- a/tutorials/manage-instances-with-terraform-and-functions/index.mdx
+++ b/tutorials/manage-instances-with-terraform-and-functions/index.mdx
@@ -57,7 +57,7 @@ This tutorial will simulate a project with a production environment running all
-- variables.tf
```
4. Edit the `backend.tf` file to enable remote configuration backup:
- ```json
+ ```hcl
terraform {
backend "s3" {
bucket = "XXXXXXXXX"
@@ -78,7 +78,7 @@ This tutorial will simulate a project with a production environment running all
*/
```
5. Edit the `provider.tf` file and add Scaleway as a provider:
- ```json
+ ```hcl
terraform {
required_providers {
scaleway = {
@@ -91,7 +91,7 @@ This tutorial will simulate a project with a production environment running all
```
6. Specify the following variables in the `variables.tf` file:
- ```json
+ ```hcl
variable "zone" {
type = string
}
@@ -115,7 +115,7 @@ This tutorial will simulate a project with a production environment running all
}
```
7. Add the variable values to `terraform.tfvars`:
- ```bash
+ ```hcl
zone = "fr-par-1"
region = "fr-par"
env = "dev"
@@ -170,7 +170,7 @@ def handle(event, context):
## Configuring your infrastructure
1. Edit the file `main.tf` to add a production Instance using a GP1-S named "Prod":
- ```json
+ ```hcl
## Configuring Producion environment
resource "scaleway_instance_ip" "public_ip-prod" {
project_id = var.project_id
@@ -193,7 +193,7 @@ def handle(event, context):
}
```
2. Add a development Instance using a DEV1-L named "Dev":
- ```json
+ ```hcl
## Configuring Development environment that will be automatically turn off on week-ends and turn on monday mornings
resource "scaleway_instance_ip" "public_ip-dev" {
project_id = var.project_id
@@ -215,7 +215,7 @@ def handle(event, context):
}
```
3. Write a function that will run the code you have just written:
- ```json
+ ```hcl
# Creating function code archive that will then be updated
data "archive_file" "source_zip" {
type = "zip"
@@ -247,7 +247,7 @@ def handle(event, context):
}
```
4. Add a cronjob attached to the function to turn your function off every Friday evening:
- ```json
+ ```hcl
# Adding a first cron to turn off the Instance every friday evening (11:30 pm)
resource "scaleway_function_cron" "turn-off" {
function_id = scaleway_function.main.id
@@ -261,7 +261,7 @@ def handle(event, context):
}
```
5. Create a cronjob attached to the function to turn your function on every Monday morning:
- ```json
+ ```hcl
# Adding a second cron to turn on the Instance every monday morning (7:00 am)
resource "scaleway_function_cron" "turn-on" {
function_id = scaleway_function.main.id
diff --git a/tutorials/mastodon-community/index.mdx b/tutorials/mastodon-community/index.mdx
index 5fdbaa3d8d..cc50fd453b 100644
--- a/tutorials/mastodon-community/index.mdx
+++ b/tutorials/mastodon-community/index.mdx
@@ -18,7 +18,7 @@ Mastodon is an open-source, self-hosted, social media and social networking serv
As there is no central server, you can choose whether to join or leave an instance according to its policy without actually leaving Mastodon Social Network. Mastodon is a part of [Fediverse](https://fediverse.party/), allowing users to interact with users on other platforms that support the same protocol for example: [PeerTube](https://joinpeertube.org/en/), [Friendica](https://friendi.ca/) and [GNU Social](https://gnu.io/social/).
-Mastodon provides the possibility of using [S3 compatible Object Storage](/storage/object/how-to/create-a-bucket/) to store media content uploaded to Instances, making it flexible and scalable.
+Mastodon provides the possibility of using [Amazon S3-compatible Object Storage](/storage/object/how-to/create-a-bucket/) to store media content uploaded to Instances, making it flexible and scalable.
@@ -338,7 +338,7 @@ Mastodon requires access to a PostgreSQL database to store its configuration and
```
Provider Amazon S3
- S3 bucket name: [scaleway_bucket_name]
+ Object Storage bucket name: [scaleway_bucket_name]
S3 region: fr-par
S3 hostname: s3.fr-par.scw.cloud
S3 access key: [scaleway_access_key]
diff --git a/tutorials/migrate-data-minio-client/index.mdx b/tutorials/migrate-data-minio-client/index.mdx
index fa37399a0e..273400d861 100644
--- a/tutorials/migrate-data-minio-client/index.mdx
+++ b/tutorials/migrate-data-minio-client/index.mdx
@@ -14,7 +14,7 @@ dates:
posted: 2019-03-20
---
-The [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff, etc. It can communicate with any S3-compatible cloud storage provider and can be used to migrate data from one region to another.
+The [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff, etc. It can communicate with any Amazon S3-compatible cloud storage provider and can be used to migrate data from one region to another.
@@ -53,7 +53,7 @@ The [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) prov
2. Optionally, add other providers:
- For S3-compatible storage:
+ For Amazon S3-compatible storage:
```
mc config host add s3 --api S3v4
```
@@ -74,7 +74,7 @@ The [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) prov
```
The commands above:
- 1. Migrates data from a S3 compatible Object Storage to Scaleway's **fr-par** Object Storage
+ 1. Migrates data from an Amazon S3-compatible Object Storage provider to Scaleway's **fr-par** Object Storage
2. Migrates data from GCS Object Storage to Scaleway's **nl-ams** Object Storage
diff --git a/tutorials/migrate-data-rclone/index.mdx b/tutorials/migrate-data-rclone/index.mdx
index 7d105cfba9..fd14d7138f 100644
--- a/tutorials/migrate-data-rclone/index.mdx
+++ b/tutorials/migrate-data-rclone/index.mdx
@@ -14,7 +14,7 @@ dates:
posted: 2019-03-20
---
-Rclone provides a modern alternative to `rsync`. The tool communicates with any S3-compatible cloud storage provider as well as other storage platforms and can be used to migrate data from one bucket to another, even if those buckets are in different regions.
+Rclone provides a modern alternative to `rsync`. The tool communicates with any Amazon S3-compatible cloud storage provider as well as other storage platforms and can be used to migrate data from one bucket to another, even if those buckets are in different regions.
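+For example, once two remotes are configured (remote and bucket names below are placeholders), a cross-region migration is a single command:
+
+```bash
+# Copy every object from a bucket in one region to a bucket in another
+rclone copy fr-par-remote:source-bucket nl-ams-remote:destination-bucket
+```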
diff --git a/tutorials/monitor-gpu-instance-cockpit/index.mdx b/tutorials/monitor-gpu-instance-cockpit/index.mdx
new file mode 100644
index 0000000000..7ccde487ad
--- /dev/null
+++ b/tutorials/monitor-gpu-instance-cockpit/index.mdx
@@ -0,0 +1,217 @@
+---
+meta:
+ title: Monitor GPU Instances using Cockpit and the NVIDIA Data Center GPU Manager (DCGM) Exporter
+ description: This page explains how to visualize metrics and logs from GPU Instances using Cockpit and the NVIDIA Data Center GPU Manager (DCGM) Exporter
+content:
+ h1: Monitor GPU Instances using Cockpit and the NVIDIA Data Center GPU Manager (DCGM) Exporter
+ paragraph: This page explains how to visualize metrics and logs from GPU Instances using Cockpit and the NVIDIA Data Center GPU Manager (DCGM) Exporter
+tags: cockpit monitor grafana-alloy monitoring nvidia gpu-instance
+categories:
+ - cockpit
+dates:
+ validation: 2024-10-21
+ posted: 2024-10-21
+---
+
+This tutorial guides you through the process of monitoring your [GPU Instances](/compute/instances/concepts/#gpu-instance) using Cockpit and the [NVIDIA Data Center GPU Manager (DCGM) Exporter](https://docs.nvidia.com/datacenter/cloud-native/gpu-telemetry/latest/dcgm-exporter.html). Visualize your GPU Instances' metrics and ensure optimal performance and usage of your resources.
+
+
+
+- A Scaleway account logged into the [console](https://console.scaleway.com)
+- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
+- Created a [GPU Instance](/compute/gpu/how-to/create-manage-gpu-instance/)
+- [Connected to your Instance via SSH](/compute/gpu/how-to/create-manage-gpu-instance/#how-to-connect-to-a-gpu-instance)
+- Installed [Docker Engine](https://docs.docker.com/engine/install/) and [Docker Compose](https://docs.docker.com/compose/install/linux/#install-using-the-repository) on your GPU Instance.
+
+## Create a Cockpit data source and credentials
+
+### Create a Cockpit data source
+
+We create a Cockpit data source because your GPU Instance's metrics will be stored in it, and because the exporter agent needs the data source's configuration details to push your Instance's metrics to it.
+
+1. Create a metrics [custom data source in Cockpit](/observability/cockpit/how-to/create-external-data-sources/). For the sake of this tutorial, we will name it `gpu-instance-metrics`.
+
+
+ - To fill in the cost estimator, you can assume that **1 metric sent without [specific cardinality](https://grafana.com/docs/tempo/latest/metrics-generator/cardinality/)** (i.e. without labels or value duplication for the same metric) **every minute will generate around 50 000 samples per month** (60 minutes x 730 hours per month = 43 800 samples). By default, DCGM and node exporter will send multiple metrics and add labels to these metrics, leading to a higher number of samples.
+ - **We recommend that you complete this tutorial first** to visualize your data, and **then review your configuration to optimize the number of metrics or labels sent**.
+
+2. Click your metrics data source to view information such as its **URL** and **push path**.
+
+### Create a token
+
+1. Create a [Cockpit token](/observability/cockpit/how-to/create-token/) from the [Scaleway console](https://console.scaleway.com/cockpit/tokens).
+2. Select a region for the token. It must match the region of the data source you created earlier.
+3. Tick the **Push Metrics** box and click **Create token** to confirm.
+
+
+ Copy and store your token securely. We will use it to allow the Grafana Alloy agent to push your metrics to the metrics data source you created earlier.
+
+
+## Collect metrics from your GPU Instance
+
+### Install the NVIDIA DCGM Exporter, node exporter and Grafana Alloy agent on your GPU Instance
+
+1. [Connect to your GPU Instance through SSH](/compute/gpu/how-to/create-manage-gpu-instance/#how-to-connect-to-a-gpu-instance).
+2. Copy and paste the following command to create a configuration file named `config.alloy` in your Instance:
+ ```sh
+ touch config.alloy
+ ```
+3. Copy and paste the following template inside `config.alloy`:
+ ```hcl
+ prometheus.remote_write "cockpit" {
+ endpoint {
+ url = "https://example-afc6-4d02-a2fd-bc020bbaa7d0.metrics.cockpit.fr-par.scw.cloud/api/v1/push"
+ headers = {
+ "X-TOKEN" = "example_bKNpXZZP6BSKiYzV8fiQL1yR_kP_VLB-h0tpYAkaNoVTHVm8q",
+ }
+ }
+ }
+
+ prometheus.scrape "dcgm_exporter" {
+ scrape_interval = "60s"
+ targets = [{__address__ = "dcgm_exporter:9400"}]
+ forward_to = [prometheus.remote_write.cockpit.receiver]
+ }
+
+ prometheus.exporter.unix "node_exporter" {
+ set_collectors = [
+ "uname",
+ "cpu",
+ "cpufreq",
+ "loadavg",
+ "meminfo",
+ "filesystem",
+ "netdev",
+ ]
+ }
+
+ prometheus.scrape "node_exporter" {
+ scrape_interval = "60s"
+ targets = prometheus.exporter.unix.node_exporter.targets
+ forward_to = [prometheus.remote_write.cockpit.receiver]
+ }
+ ```
+4. Replace the value of `cockpit.endpoint.url` (`https://example-afc6-4d02-a2fd-bc020bbaa7d0.metrics.cockpit.fr-par.scw.cloud/api/v1/push`) with the URL and push path of your `gpu-instance-metrics` [Cockpit data source](https://console.scaleway.com/cockpit/dataSource), and the value of `cockpit.endpoint.headers.X-TOKEN` (`example_bKNpXZZP6BSKiYzV8fiQL1yR_kP_VLB-h0tpYAkaNoVTHVm8q`) with the token you created earlier.
+
+ This configuration allows you to:
+ - collect performance data (using `dcgm_exporter`) from your GPU Instance. This includes information like GPU load (how much of the GPU's processing power is being used), temperature, and other relevant metrics.
+ - collect standard Instance metrics with `node_exporter` (CPU load, disk size, etc.)
+ - push the collected data to your Cockpit data source (using `cockpit`).
+
+
+ - The current configuration is set to send only a limited number of metrics from `node_exporter` (the tool collecting CPU, disk, memory, etc. data). Because of this, some data might not show up on your Cockpit dashboards in Grafana when you import them.
+ - If you want to send all available data from `node_exporter`, you need to edit its configuration. Specifically, you need to remove the `set_collectors` list from the configuration. This list defines which metrics are being collected, and removing it will allow all metrics to be sent.
+ - While removing the `set_collectors` list will provide more detailed metrics, it may come with **higher resource usage and associated costs**, especially if you are using a paid service for data monitoring or storage.
+
+
+5. Copy and paste the following command to create a `docker-compose.yaml` file in your Instance:
+ ```sh
+ touch docker-compose.yaml
+ ```
+6. Copy and paste the following configuration inside `docker-compose.yaml`, save it and exit the file.
+ ```yaml
+ services:
+ dcgm_exporter:
+ image: nvcr.io/nvidia/k8s/dcgm-exporter:3.3.0-3.2.0-ubuntu22.04
+ deploy:
+ resources:
+ reservations:
+ devices:
+ - driver: nvidia
+ count: all
+ capabilities: [ gpu ]
+ cap_add:
+ - SYS_ADMIN
+ ports:
+ - "9400:9400"
+
+ agent:
+ image: grafana/alloy:latest
+ ports:
+ - "12345:12345"
+ volumes:
+ - "./config.alloy:/etc/alloy/config.alloy"
+ command: [
+ "run",
+ "--server.http.listen-addr=0.0.0.0:12345",
+ "/etc/alloy/config.alloy",
+ ]
+ ```
+ This configuration will:
+ - deploy the DCGM exporter
+ - deploy the Grafana Alloy agent
+
+7. Run the Docker services using the following command:
+ ```sh
+ docker compose up
+ ```
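+
+ Optionally, from a second terminal (since `docker compose up` stays in the foreground), you can check that both services respond. A quick sketch, assuming the port mappings from the `docker-compose.yaml` above:
+ ```sh
+ # The DCGM exporter should expose GPU metrics on port 9400
+ curl -s http://localhost:9400/metrics | grep DCGM_FI_DEV_GPU_TEMP
+ # The Alloy agent's HTTP server should answer on port 12345
+ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:12345
+ ```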
+
+## Create Cockpit dashboards in Grafana
+
+### Create a GPU metrics dashboard
+
+1. Access the **Overview** tab of your [Cockpit](https://console.scaleway.com/cockpit/overview) and click **Open dashboards** to open your Cockpit dashboards in Grafana.
+
+2. Click the **+** icon in the top-right-hand corner, then click **Import dashboard**.
+
+3. Copy the ID (`12219`) of the [Grafana NVIDIA DCGM Exporter dashboard](https://grafana.com/grafana/dashboards/12219-nvidia-dcgm-exporter-dashboard/) and paste it in the **Import via grafana.com** field.
+
+4. Click **Load**.
+
+5. Select your Prometheus data source named `gpu-instance-metrics`, then click **Import**.
+
+You should see your dashboard with data such as **GPU Temperature** or **GPU Power Usage**.
+
+
+ If you only see an empty dashboard with the "Dashboard not Found" and "Access denied to this dashboard" error messages, wait a few seconds and refresh the page. Your dashboard should then display.
+ Alternatively, you can also click the **Menu** icon on the left, then on **Dashboards** and search through your dashboards. You should see your newly created dashboard.
+
+
+### Create a CPU and disk metrics Cockpit dashboard in Grafana
+
+1. Access the **Overview** tab of your [Cockpit](https://console.scaleway.com/cockpit/overview) and click **Open dashboards** to open your Cockpit dashboards in Grafana.
+
+2. Click the **+** icon in the top-right-hand corner, then click **Import dashboard**.
+
+3. Copy the ID (`1860`) of the [Node Exporter Full dashboard](https://grafana.com/grafana/dashboards/1860-node-exporter-full/) and paste it in the **Import via grafana.com** field.
+
+4. Click **Load**.
+
+5. Select your Prometheus data source named `gpu-instance-metrics`, then click **Import**.
+
+You should now see your dashboard with data such as **CPU usage** and **Memory Usage**.
+
+
+ If you only see an empty dashboard with the "Dashboard not Found" and "Access denied to this dashboard" error messages, wait a few seconds and refresh the page. Your dashboard should then display.
+ If you still do not see any data, make sure that you select the `gpu-instance-metrics` data source in the **Datasource** dropdown list located in the top-left-hand corner.
+
+
+
+ The current configuration of the Node Exporter agent does not include certain metrics, such as:
+ - Swap used: How much swap space (virtual memory) is currently being used by the system.
+ - Root FS used: How much of the root file system (main storage partition) is being used.
+
+
+You can now find your newly created dashboards in your list of Cockpit dashboards in Grafana. This allows you to access your GPU Instances' data to monitor and optimize your resources.
+
+### Going further
+
+- **Add more metrics to your dashboards**
+ - Connect to your GPU Instance via SSH
+ - Edit the `config.alloy` file and restart the agents using the `docker compose up` command
+ - Update your Cockpit dashboards in Grafana
+
+- **Create custom dashboards**
+ - In Grafana, explore the metrics you have sent by clicking the **Menu** icon on the left, then **Explore**.
+ - Select your custom data source named `gpu-instance-metrics` in the **Datasource** dropdown list located in the top-left-hand corner.
+ - Click **Metrics browser**. You should see a list of metrics appear (for example, `DCGM_FI_DEV_GPU_TEMP` or `node_cpu_seconds_total`).
+ - Write the desired query, click **Run query** to visualize data, and then **Add to dashboard** to add it to a new or existing dashboard.
+
+## Troubleshooting
+
+If you encounter any issues, make sure that you meet all the requirements listed at the beginning of this tutorial.
+
+You can run `docker -v` in your terminal to check your Docker version. You should see an output similar to the following:
+ ```
+ Docker version 24.0.6, build ed223bc820
+ ```
diff --git a/tutorials/nvidia-triton/index.mdx b/tutorials/nvidia-triton/index.mdx
index c5a956604e..75967228b7 100644
--- a/tutorials/nvidia-triton/index.mdx
+++ b/tutorials/nvidia-triton/index.mdx
@@ -46,9 +46,9 @@ For this tutorial, we will use a pre-trained model available in the Triton Infer
./fetch_models.sh
```
5. Navigate to the `server/docs/examples/model_repository` directory within the cloned repository.
-6. Upload the example model folder to your bucket in Scaleway Object Storage. You can use the [Scaleway Object Storage API](/storage/object/api-cli/using-api-call-list/), any S3 compatible tool, or web interface to upload the model folder.
+6. Upload the example model folder to your bucket in Scaleway Object Storage. You can use the [Scaleway Object Storage API](/storage/object/api-cli/using-api-call-list/), any Amazon S3-compatible tool, or a web interface to upload the model folder.
- You can use the `s3cmd` [command-line tool](/tutorials/s3cmd/) or any other S3-compatible tool to upload your data.
+ You can use the `s3cmd` [command-line tool](/tutorials/s3cmd/) or any other Amazon S3-compatible tool to upload your data.
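+
+ For instance, a hypothetical `s3cmd` upload (the bucket name is a placeholder) could look like:
+ ```bash
+ # Recursively upload the example models to your bucket
+ s3cmd put --recursive model_repository/ s3://my-triton-models/
+ ```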
## Configuring Triton Inference Server
diff --git a/tutorials/object-storage-s3fs/index.mdx b/tutorials/object-storage-s3fs/index.mdx
index f105cc58bd..9074b3f0e6 100644
--- a/tutorials/object-storage-s3fs/index.mdx
+++ b/tutorials/object-storage-s3fs/index.mdx
@@ -13,7 +13,7 @@ dates:
posted: 2018-07-16
---
-In this tutorial you learn how to use [s3fs](https://github.com/s3fs-fuse/s3fs-fuse) as a client for [Scaleway Object Storage](/storage/object/concepts/#object-storage). `s3fs` is a FUSE-backed file interface for S3, allowing you to mount your S3 buckets on your local Linux or macOS operating system. `s3fs` preserves the native object format for files, so they can be used with other tools including AWS CLI.
+In this tutorial, you learn how to use [s3fs](https://github.com/s3fs-fuse/s3fs-fuse) as a client for [Scaleway Object Storage](/storage/object/concepts/#object-storage). `s3fs` is a FUSE-backed file interface for S3, allowing you to mount your Object Storage buckets on your local Linux or macOS operating system. `s3fs` preserves the native object format for files, so they can be used with other tools including AWS CLI.
The version of `s3fs` available for installation using the systems package manager does not support files larger than 10 GB. It is therefore recommended to compile a version, including the required corrections, from the s3fs source code repository. This tutorial will guide you through that process. Note that even with the source code compiled version of s3fs, there is a [maximum file size of 128 GB](#configuring-s3fs) when using s3fs with Scaleway Object Storage.
@@ -92,7 +92,7 @@ Next, download and install `s3fs-fuse` itself:
## Configuring s3fs
-1. Execute the following commands to enter your S3 credentials (separated by a `:`) in a file `$HOME/.passwd-s3fs` and set owner-only permissions. This presumes that you have set your [API credentials](/identity-and-access-management/iam/how-to/create-api-keys/) as environment variables named `ACCESS_KEY` and `SECRET_KEY`:
+1. Execute the following commands to enter your credentials (separated by a `:`) in a file `$HOME/.passwd-s3fs` and set owner-only permissions. This presumes that you have set your [API credentials](/identity-and-access-management/iam/how-to/create-api-keys/) as environment variables named `ACCESS_KEY` and `SECRET_KEY`:
```
echo $ACCESS_KEY:$SECRET_KEY > $HOME/.passwd-s3fs
chmod 600 $HOME/.passwd-s3fs
@@ -123,7 +123,7 @@ Next, download and install `s3fs-fuse` itself:
The file system of the mounted bucket will appear in your OS like a local file system. This means you can access the files as if they were on your hard drive.
-Note that there are some limitations when using S3 as a file system:
+Note that there are some limitations when using Object Storage as a file system:
- Random writes or appends to files require rewriting the entire file
- Metadata operations such as listing directories have poor performance due to network latency
diff --git a/tutorials/restic-s3-backup/index.mdx b/tutorials/restic-s3-backup/index.mdx
index db93f9a8d0..9b6ae1227b 100644
--- a/tutorials/restic-s3-backup/index.mdx
+++ b/tutorials/restic-s3-backup/index.mdx
@@ -15,7 +15,7 @@ dates:
posted: 2022-04-04
---
-Restic is a backup tool that allows you to back up your Linux, Windows, Mac, or BSD machines and send your backups to repositories via [different storage protocols](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html), including S3 (Object Storage).
+Restic is a backup tool that allows you to back up your Linux, Windows, Mac, or BSD machines and send your backups to repositories via [different storage protocols](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html), including Object Storage.
In this tutorial, you learn how to backup a Scaleway Instance running on Ubuntu 20.04 using Restic and Object Storage.
@@ -48,7 +48,7 @@ In this tutorial, you learn how to backup a Scaleway Instance running on Ubuntu
restic version
```
-## Setting up the S3 repository
+## Setting up the Object Storage repository
A repository is the storage space where your backups will be hosted. In this tutorial, we will use Scaleway Object Storage buckets to host our backups.
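As a sketch of what this looks like in practice, restic addresses an Object Storage bucket through its `s3` backend and reads credentials from the standard AWS environment variables. The bucket name and region below are placeholders.

```bash
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>

# Initialize the repository inside the bucket
restic -r s3:s3.fr-par.scw.cloud/my-restic-bucket init

# Back up a directory, then list the resulting snapshots
restic -r s3:s3.fr-par.scw.cloud/my-restic-bucket backup /home
restic -r s3:s3.fr-par.scw.cloud/my-restic-bucket snapshots
```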
diff --git a/tutorials/s3-customize-url-cname/index.mdx b/tutorials/s3-customize-url-cname/index.mdx
index 8ed7e55216..3f22787a1c 100644
--- a/tutorials/s3-customize-url-cname/index.mdx
+++ b/tutorials/s3-customize-url-cname/index.mdx
@@ -1,15 +1,15 @@
---
meta:
- title: S3 Object Storage - Customizing URLs with CNAME
+ title: Object Storage - Customizing URLs with CNAME
description: This page shows how to use a customized domain name with Object Storage buckets
content:
- h1: S3 Object Storage - Customizing URLs with CNAME
+ h1: Object Storage - Customizing URLs with CNAME
paragraph: This page shows how to use a customized domain name with Object Storage buckets
categories:
- storage
- object-storage
- domains-and-dns
-tags: Object-Storage CNAME domain S3
+tags: Object-Storage CNAME domain amazon-S3
dates:
validation: 2024-07-16
posted: 2019-05-21
diff --git a/tutorials/setup-nginx-reverse-proxy-s3/index.mdx b/tutorials/setup-nginx-reverse-proxy-s3/index.mdx
index fa38b87eeb..091f7480fc 100644
--- a/tutorials/setup-nginx-reverse-proxy-s3/index.mdx
+++ b/tutorials/setup-nginx-reverse-proxy-s3/index.mdx
@@ -1,11 +1,11 @@
---
meta:
- title: Setting up Nginx as a reverse proxy with S3 Object Storage
- description: Learn how to configure an Nginx reverse proxy with Scaleway Object Storage (S3) for optimized access and caching.
+ title: Setting up Nginx as a reverse proxy with Object Storage
+ description: Learn how to configure an Nginx reverse proxy with Scaleway Object Storage for optimized access and caching.
content:
- h1: Setting up Nginx as a reverse proxy with S3 Object Storage
- paragraph: This guide shows you how to configure an Nginx reverse proxy with Scaleway S3 Object Storage for optimized access and caching.
-tags: Object-Storage, S3, reverse-proxy, nginx
+ h1: Setting up Nginx as a reverse proxy with Object Storage
+ paragraph: This guide shows you how to configure an Nginx reverse proxy with Scaleway Object Storage for optimized access and caching.
+tags: Object-Storage amazon-S3 reverse-proxy nginx
categories:
- instances
- object-storage
@@ -156,7 +156,7 @@ You can now access the files of your bucket by going directly to `http://s3proxy
## Configuring Nginx as a reverse proxy for HTTPS
-Connections to your S3 proxy are currently available in plain, unencrypted HTTP only. It is possible to encrypt the connection between the client and the Nginx proxy by configuring HTTPS. To do so, we will obtain a free SSL certificate issued by [Let's Encrypt](https://letsencrypt.org/) using [Certbot](https://certbot.eff.org/), a tool to obtain, manage and renew Let's Encrypt certificates automatically.
+Connections to your proxy are currently available in plain, unencrypted HTTP only. It is possible to encrypt the connection between the client and the Nginx proxy by configuring HTTPS. To do so, we will obtain a free SSL certificate issued by [Let's Encrypt](https://letsencrypt.org/) using [Certbot](https://certbot.eff.org/), a tool to obtain, manage and renew Let's Encrypt certificates automatically.
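The steps below install Certbot; once it is in place, issuing and installing a certificate for the proxy host usually comes down to a single command, sketched here with a placeholder domain:

```bash
sudo certbot --nginx -d s3proxy.example.com
```

Certbot's Nginx plugin updates the matching server block with the certificate paths and can optionally set up the HTTP-to-HTTPS redirect.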
1. Add the Certbot repository to apt to download the latest release of the software. Certbot is in active development and the packages included in Ubuntu may be already outdated.
```
diff --git a/tutorials/sinatra/index.mdx b/tutorials/sinatra/index.mdx
index eed21f195b..f7e760375b 100644
--- a/tutorials/sinatra/index.mdx
+++ b/tutorials/sinatra/index.mdx
@@ -9,7 +9,7 @@ tags: ansible Sinatra Ruby RubyGems
categories:
- instances
dates:
- validation: 2024-04-22
+ validation: 2024-10-29
posted: 2018-08-17
---
diff --git a/tutorials/snapshot-instances-jobs/index.mdx b/tutorials/snapshot-instances-jobs/index.mdx
index 99c358473f..7b04845764 100644
--- a/tutorials/snapshot-instances-jobs/index.mdx
+++ b/tutorials/snapshot-instances-jobs/index.mdx
@@ -169,7 +169,7 @@ Serverless Jobs rely on containers to run in the cloud, and therefore require a
1. Create a `Dockerfile`, and add the following code to it:
- ```docker
+ ```dockerfile
# Using the Alpine-based golang image
FROM golang:1.22-alpine
diff --git a/tutorials/store-s3-cyberduck/index.mdx b/tutorials/store-s3-cyberduck/index.mdx
index af2c2f2daa..334b82bb05 100644
--- a/tutorials/store-s3-cyberduck/index.mdx
+++ b/tutorials/store-s3-cyberduck/index.mdx
@@ -1,16 +1,16 @@
---
meta:
- title: Storing objects with Object Storage and Cyberduck
- description: This page shows you how to store objects with Cyberduck.
+ title: Storing objects with Scaleway Object Storage and Cyberduck
+ description: This page shows you how to store objects with Cyberduck on Scaleway Object Storage.
content:
- h1: Storing objects with Object Storage and Cyberduck
- paragraph: This page shows you how to store objects with Cyberduck.
+ h1: Storing objects with Scaleway Object Storage and Cyberduck
+ paragraph: This page shows you how to store objects with Cyberduck on Scaleway Object Storage.
tags: Cyberduck Object-Storage
categories:
- storage
- object-storage
dates:
- validation: 2024-04-22
+ validation: 2024-10-28
posted: 2018-06-04
---
diff --git a/tutorials/store-wp-mediacloud-s3/index.mdx b/tutorials/store-wp-mediacloud-s3/index.mdx
index 46a34ed460..4ef01400f2 100644
--- a/tutorials/store-wp-mediacloud-s3/index.mdx
+++ b/tutorials/store-wp-mediacloud-s3/index.mdx
@@ -10,7 +10,7 @@ categories:
- object-storage
- instances
dates:
- validation: 2024-04-22
+ validation: 2024-10-29
posted: 2019-02-13
---
diff --git a/tutorials/strapi-app-serverless-containers-sqldb/index.mdx b/tutorials/strapi-app-serverless-containers-sqldb/index.mdx
index c416876cb1..9d5c2e76e8 100644
--- a/tutorials/strapi-app-serverless-containers-sqldb/index.mdx
+++ b/tutorials/strapi-app-serverless-containers-sqldb/index.mdx
@@ -42,11 +42,11 @@ You can either deploy your application:
2. Run the command below to make sure the environment variables are properly set:
```sh
- scw info
+ scw info
```
This command displays your access key and secret key in the last two lines of the output. The `ORIGIN` column should display `env (SCW_ACCESS_KEY)` and `env (SCW_SECRET_KEY)`, and not `default profile`.
-
+
```bash
KEY VALUE ORIGIN
(...)
@@ -77,16 +77,16 @@ You can either deploy your application:
&& psql -h $DATABASE_HOST -p $DATABASE_PORT \
-d $DATABASE_NAME -U $DATABASE_USERNAME
```
- An input field with the name of your database should display:
+ A prompt displaying the name of your database should appear:
```
psql (15.3, server 16.1 (Debian 16.1-1.pgdg120+1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_128_GCM_SHA256, compression: off)
- Type "help" for help.
+ Type "help" for help.
tutorial-strapi-blog-db=>
```
-
+
### Running Strapi locally
1. Create a Strapi blog template
@@ -95,7 +95,7 @@ You can either deploy your application:
--dbclient=postgres --dbhost=$DATABASE_HOST \
--dbport=$DATABASE_PORT --dbname=$DATABASE_NAME \
--dbusername=$DATABASE_USERNAME \
- --dbpassword=$DATABASE_PASSWORD --dbssl=true
+ --dbpassword=$DATABASE_PASSWORD --dbssl=true
```
2. Access the folder you just created:
@@ -120,7 +120,7 @@ You can either deploy your application:
touch Dockerfile
```
2. Add the code below to your file, save it, and exit.
- ```bash
+ ```dockerfile
# Creating a multi-stage build for production
FROM node:20-alpine as build
RUN apk update && apk add --no-cache build-base gcc autoconf automake zlib-dev libpng-dev vips-dev git > /dev/null 2>&1
@@ -189,12 +189,12 @@ You can either deploy your application:
├── jsconfig.json
├── package.json
├── README.md
- └── yarn.lock
+ └── yarn.lock
```
5. Build your application container:
```bash
- docker build -t my-strapi-blog .
+ docker build -t my-strapi-blog .
```
The Docker image build process can take a few minutes, particularly during the `npm install` step, since Strapi requires around 1 GB of node modules to be built.
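Before pushing the image anywhere, you can optionally smoke-test it locally. This is a sketch only: it reuses the database variables exported earlier in the tutorial and Strapi's default port 1337, and `DATABASE_SSL=true` is assumed from the `--dbssl=true` flag used when creating the app. Depending on how your Dockerfile handles Strapi's secrets, you may also need to pass `APP_KEYS`, `JWT_SECRET`, and similar variables.

```bash
# Run the freshly built image locally, wired to the Serverless SQL Database
docker run --rm -p 1337:1337 \
  -e DATABASE_CLIENT=postgres \
  -e DATABASE_HOST=$DATABASE_HOST \
  -e DATABASE_PORT=$DATABASE_PORT \
  -e DATABASE_NAME=$DATABASE_NAME \
  -e DATABASE_USERNAME=$DATABASE_USERNAME \
  -e DATABASE_PASSWORD=$DATABASE_PASSWORD \
  -e DATABASE_SSL=true \
  my-strapi-blog
```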
@@ -281,7 +281,7 @@ You can either deploy your application:
```
When the status appears as `ready`, you can access the Strapi Administration Panel via your browser.
-
+
3. Copy the endpoint URL displayed next to the `DomainName` property, and paste it into your browser. The main Strapi page displays. Click "Open the administration" or add `/admin` to your browser URL to access the Strapi Administration Panel.
4. (Optional) You can check that Strapi APIs are working with the following command, or by accessing `https://{container_url}/api/articles` in your browser:
@@ -319,7 +319,7 @@ However, your Strapi container currently connects to your database with your [us
To secure your deployment, we will now add a dedicated [IAM application](/identity-and-access-management/iam/concepts/#application), give it the minimum required permissions, and provide its credentials to your Strapi container.
-1. Run the following command to create an [IAM application](/identity-and-access-management/iam/concepts/#application) and export it as a variable:
+1. Run the following command to create an [IAM application](/identity-and-access-management/iam/concepts/#application) and export it as a variable:
```bash
export SCW_APPLICATION_ID=$(scw iam application create name=tutorial-strapi-blog -o json | jq -r '.id')
```
@@ -364,10 +364,10 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
secret-environment-variables.5.value=$JWT_SECRET redeploy=true
```
-6. Refresh your browser page displaying the Strapi Administration Panel. An updated version displays.
+6. Refresh your browser page displaying the Strapi Administration Panel. An updated version displays.
You have now deployed a full serverless Strapi blog example!
-
+
## Going further with containers
- Inspect your newly created resources in the Scaleway console:
@@ -399,11 +399,11 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
2. Run the command below to make sure the environment variables are properly set:
```sh
- scw info
+ scw info
```
This command displays your access key and secret key in the last two lines of the output. The `ORIGIN` column should display `env (SCW_ACCESS_KEY)` and `env (SCW_SECRET_KEY)`, and not `default profile`.
-
+
```bash
KEY VALUE ORIGIN
(...)
@@ -512,12 +512,12 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
├── jsconfig.json
├── package.json
├── README.md
- └── yarn.lock
+ └── yarn.lock
```
8. Build your application container:
```bash
- docker build -t my-strapi-blog .
+ docker build -t my-strapi-blog .
```
The Docker image build process can take a few minutes, particularly during the `npm install` step, since Strapi requires around 1 GB of node modules to be built.
@@ -545,7 +545,7 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
docker push $REGISTRY_ENDPOINT/my-strapi-blog:latest
```
-
+
### Creating the Terraform configuration
1. Run the following command to create a new folder to store your Terraform files, and access it:
@@ -553,7 +553,7 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
cd ..
mkdir terraform-strapi-blog &&
cd terraform-strapi-blog
- ```
+ ```
2. Create an empty `main.tf` Terraform file inside the folder.
@@ -565,7 +565,7 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
```
3. Add the following code to your `main.tf` file:
- ```json
+ ```hcl
terraform {
required_providers {
scaleway = {
@@ -577,12 +577,12 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
}
required_version = ">= 0.13"
}
-
+
variable "REGISTRY_ENDPOINT" {
type = string
description = "Container Registry endpoint where your application container is stored"
}
-
+
variable "DEFAULT_PROJECT_ID" {
type = string
description = "Project ID where your resources will be created"
@@ -606,12 +606,12 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
for_each = toset(local.secrets)
length = 16
}
-
+
resource scaleway_container_namespace main {
name = "tutorial-strapi-blog-tf"
description = "Namespace created for full serverless Strapi blog deployment"
}
-
+
resource scaleway_container main {
name = "tutorial-strapi-blog-tf"
@@ -628,7 +628,7 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
privacy = "public"
protocol = "http1"
deploy = true
-
+
environment_variables = {
"DATABASE_CLIENT"="postgres",
"DATABASE_USERNAME" = scaleway_iam_application.app.id,
@@ -648,11 +648,11 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
"JWT_SECRET" = random_bytes.generated_secrets["jwt_secret"].base64
}
}
-
+
resource scaleway_iam_application "app" {
name = "tutorial-strapi-blog-tf"
}
-
+
resource scaleway_iam_policy "db_access" {
name = "tutorial-strapi-policy-tf"
description = "Gives tutorial Strapi blog access to Serverless SQL Database"
@@ -662,17 +662,17 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
permission_set_names = ["ServerlessSQLDatabaseReadWrite"]
}
}
-
+
resource scaleway_iam_api_key "api_key" {
application_id = scaleway_iam_application.app.id
}
-
+
resource scaleway_sdb_sql_database "database" {
name = "tutorial-strapi-tf"
min_cpu = 0
max_cpu = 8
}
-
+
output "database_connection_string" {
// Output as an example, you can give this string to your application
value = format("postgres://%s:%s@%s",
@@ -682,7 +682,7 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
)
sensitive = true
}
-
+
output "container_url" {
// Output as an example, you can give this string to your application
value = scaleway_container.main.domain_name
@@ -718,7 +718,7 @@ The Terraform file creates several resources:
```
Edit the `ADMIN_EMAIL` and `ADMIN_PASSWORD` values, replacing them with your own email and password. Optionally, you can also edit the `ADMIN_FIRSTNAME` and `ADMIN_LASTNAME` values to change the default admin first and last name.
- Strapi admin password requires at least 8 characters including one uppercase, one lowercase, one number, and one special character.
+ The Strapi admin password requires at least 8 characters, including one uppercase letter, one lowercase letter, one number, and one special character.
If the admin password or email does not meet the requirements, the container will not start.
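For instance, a hypothetical value satisfying all four rules:

```bash
# Hypothetical example only: uppercase, lowercase, a digit, and a special character, 11 characters long
export ADMIN_PASSWORD='Str4pi!Blog'
```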
@@ -813,7 +813,7 @@ Once you are done, run the following command to stop all your resources:
- **Fine-tune deployment options** such as autoscaling, targeted regions, and more. You can find more information by typing `scw container deploy --help` in your terminal, or by referring to the [dedicated documentation](/serverless/containers/how-to/manage-a-container/)
- Create a secondary production environment by duplicating your built container, building it with `NODE_ENV=production`, running `npm run start`, and connecting it to another **Serverless SQL Database**. For instance, this allows you to keep editing content-types in development, which is not possible in production (see the sketch below).
-
+
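A minimal sketch of that production flow, assuming the same project directory and database variables as before:

```bash
# Build the admin panel, then start Strapi in production mode
NODE_ENV=production npm run build
NODE_ENV=production npm run start
```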
## Troubleshooting
If you encounter any issues, first check that you meet all the requirements.
@@ -827,7 +827,7 @@ If you happen to encounter any issues, first check that you meet all the require
UpdatedAt 1 year ago
Description -
```
-
+
You can also find and compare your Project and Organization ID in the [Scaleway console settings](https://console.scaleway.com/project/settings).
- You have **Docker Engine** installed. Running the `docker -v` command in a terminal should display your currently installed docker version:
diff --git a/tutorials/systemd-essentials/index.mdx b/tutorials/systemd-essentials/index.mdx
index c3532e21df..b0e6b216c9 100644
--- a/tutorials/systemd-essentials/index.mdx
+++ b/tutorials/systemd-essentials/index.mdx
@@ -9,7 +9,7 @@ tags: systemd instances
categories:
- instances
dates:
- validation: 2024-04-22
+ validation: 2024-10-29
posted: 2018-07-10
---
diff --git a/tutorials/terraform-quickstart/index.mdx b/tutorials/terraform-quickstart/index.mdx
index 004d9066df..e3c89f6add 100644
--- a/tutorials/terraform-quickstart/index.mdx
+++ b/tutorials/terraform-quickstart/index.mdx
@@ -12,7 +12,7 @@ categories:
tags: Terraform Elastic-Metal Instances HashiCorp
hero: assets/scaleway_terraform.webp
dates:
- validation: 2024-04-22
+ validation: 2024-10-29
posted: 2018-04-06
---
@@ -51,22 +51,45 @@ The installation of Terraform on Windows can be done in a single command line us
The installation of Terraform on Linux can be done in a few simple steps.
-1. Download the HashiCorp GPG key on your machine.
+1. Ensure your system is up to date and that the `gnupg` and `software-properties-common` packages are installed.
```
- wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
+ apt update && apt install -y gnupg software-properties-common
```
-2. Add the Terraform repositories to the apt sources.
+2. Download the HashiCorp GPG key on your machine.
```
- echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
+ wget -O- https://apt.releases.hashicorp.com/gpg | \
+ gpg --dearmor | \
+ sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
```
-3. Update the apt packet cache and install Terraform using `apt`.
+3. Verify the key's fingerprint.
+ ```
+ gpg --no-default-keyring \
+ --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
+ --fingerprint
+ ```
+ The `gpg` command reports the key's fingerprint:
+ ```
+ /usr/share/keyrings/hashicorp-archive-keyring.gpg
+ -------------------------------------------------
+ pub rsa4096 XXXX-XX-XX [SC]
+ AAAA AAAA AAAA AAAA
+ uid [ unknown] HashiCorp Security (HashiCorp Package Signing)
+ sub rsa4096 XXXX-XX-XX [E]
+ ```
+4. Add the Terraform repositories to the apt sources.
+ ```
+ echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
+ https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
+ sudo tee /etc/apt/sources.list.d/hashicorp.list
+ ```
+5. Update the apt package cache and install Terraform using `apt`.
```
apt update && apt install terraform
```
-4. Test the installation by running the `terraform version` command.
+6. Test the installation by running the `terraform version` command.
```
terraform version
- Terraform v1.8.1
+ Terraform v1.9.8
on linux_amd64
```
@@ -590,7 +613,7 @@ Apply the new configuration using `terraform apply`. Terraform will add an Elast
## Storing the Terraform state in the cloud
-Optionally, you can use the S3 Backend to store your Terraform state in a [Scaleway Object Storage](https://www.scaleway.com/en/object-storage/). Configure your backend as follows:
+Optionally, you can use Terraform's `s3` backend to store your Terraform state in [Scaleway Object Storage](https://www.scaleway.com/en/object-storage/). Configure your backend as follows:
```hcl
terraform {
diff --git a/tutorials/transform-bucket-images-triggers-functions-deploy/index.mdx b/tutorials/transform-bucket-images-triggers-functions-deploy/index.mdx
index 68b922460d..6fa010894c 100644
--- a/tutorials/transform-bucket-images-triggers-functions-deploy/index.mdx
+++ b/tutorials/transform-bucket-images-triggers-functions-deploy/index.mdx
@@ -1,10 +1,10 @@
---
meta:
- title: Transforming images in an S3 bucket using Serverless Functions and Triggers - Deployment
- description: This page shows you how to create and deploy functions to transform images in an S3 bucket using Serverless Functions and Triggers
+ title: Transforming images in an Object Storage bucket using Serverless Functions and Triggers - Deployment
+ description: This page shows you how to create and deploy functions to transform images in an Object Storage bucket using Serverless Functions and Triggers
content:
- h1: Transforming images in an S3 bucket using Serverless Functions and Triggers - Deployment
- paragraph: This page shows you how to create and deploy functions to transform images in an S3 bucket using Serverless Functions and Triggers
+ h1: Transforming images in an Object Storage bucket using Serverless Functions and Triggers - Deployment
+ paragraph: This page shows you how to create and deploy functions to transform images in an Object Storage bucket using Serverless Functions and Triggers
categories:
- functions
- messaging
@@ -52,7 +52,7 @@ You will now learn how to deploy Serverless Functions and connect them using tri
const SQS_ENDPOINT = process.env.SQS_ENDPOINT;
const S3_ENDPOINT = `https://s3.${S3_REGION}.scw.cloud`;
- // Create S3 service object
+ // Create Object Storage service object
const s3Client = new S3Client({
credentials: {
accessKeyId: S3_ACCESS_KEY_ID,
@@ -174,7 +174,7 @@ You will now learn how to deploy Serverless Functions and connect them using tri
width = 200;
}
- // Create S3 service object
+ // Create Object Storage service object
const s3Client = new S3Client({
credentials: {
accessKeyId: S3_ACCESS_KEY_ID,
@@ -222,7 +222,7 @@ You will now learn how to deploy Serverless Functions and connect them using tri
};
};
- // Download the image from the S3 source bucket.
+ // Download the image from the Object Storage source bucket.
try {
const input = {
Bucket: SOURCE_BUCKET,
diff --git a/tutorials/transform-bucket-images-triggers-functions-set-up/index.mdx b/tutorials/transform-bucket-images-triggers-functions-set-up/index.mdx
index 0314dc3f5c..1960b96af4 100644
--- a/tutorials/transform-bucket-images-triggers-functions-set-up/index.mdx
+++ b/tutorials/transform-bucket-images-triggers-functions-set-up/index.mdx
@@ -1,10 +1,10 @@
---
meta:
- title: Transforming images in an S3 bucket using Serverless Functions and Triggers - Set up
- description: This page shows you how to set up your environment to transform images in an S3 bucket using Serverless Functions and Triggers
+ title: Transforming images in an Object Storage bucket using Serverless Functions and Triggers - Set up
+ description: This page shows you how to set up your environment to transform images in an Object Storage bucket using Serverless Functions and Triggers
content:
- h1: Transforming images in an S3 bucket using Serverless Functions and Triggers - Set up
- paragraph: This page shows you how to set up your environment to transform images in an S3 bucket using Serverless Functions and Triggers
+ h1: Transforming images in an Object Storage bucket using Serverless Functions and Triggers - Set up
+ paragraph: This page shows you how to set up your environment to transform images in an Object Storage bucket using Serverless Functions and Triggers
categories:
- messaging
- functions
diff --git a/tutorials/trigger-ifttt-actions/index.mdx b/tutorials/trigger-ifttt-actions/index.mdx
index cb8fc8b3b9..b31c49155a 100644
--- a/tutorials/trigger-ifttt-actions/index.mdx
+++ b/tutorials/trigger-ifttt-actions/index.mdx
@@ -10,12 +10,10 @@ hero: assets/scaleway_ifttt.webp
categories:
- iot-hub
dates:
- validation: 2024-04-22
+ validation: 2024-10-29
posted: 2021-01-04
---
-## Quick & easy application creation with IoT Hub and IFTTT
-
IFTTT, an acronym for "If This, Then That," offers a user-friendly yet robust automation service, enabling users to trigger actions based on specific events.
With an extensive array of customizable events and actions at your fingertips, the possibilities are virtually endless.
diff --git a/tutorials/veeam-backup-replication-s3/index.mdx b/tutorials/veeam-backup-replication-s3/index.mdx
index e160419678..31ac4ddfea 100644
--- a/tutorials/veeam-backup-replication-s3/index.mdx
+++ b/tutorials/veeam-backup-replication-s3/index.mdx
@@ -18,7 +18,7 @@ dates:
The solution provides backup, restore, and replication functionality for virtual machines, physical servers, and workstations as well as cloud-based workloads.
-A native S3 interface for Veeam Backup & Replication is part of the Release 9.5 update 4, available in General Availability since January 22nd, 2019. It allows to push backups to an S3-compatible service to maximize backup capacity.
+A native Object Storage interface for Veeam Backup & Replication is part of Release 9.5 Update 4, generally available since January 22nd, 2019. It allows you to push backups to an Amazon S3-compatible service to maximize backup capacity.
The following schema represents the functionality of Veeam Backup and Restore which acts as an intermediate agent to manage primary data storage and secondary and archival storage:
@@ -78,7 +78,7 @@ The following schema represents the functionality of Veeam Backup and Restore wh
For a bucket located in the Amsterdam region, the service point is `s3.nl-ams.scw.cloud` and the region is `nl-ams`.
-11. Veeam will connect to the S3 infrastructure and download the list of Object Storage Buckets. Choose the bucket to be used with Veeam from the drop-down list, click **Browse**, and create and select the folder for storing backups. Then click **Next**:
+11. Veeam will connect to the Object Storage infrastructure and download the list of buckets. Choose the bucket to be used with Veeam from the drop-down list, click **Browse**, and create and select the folder for storing backups. Then click **Next**:
@@ -87,7 +87,7 @@ The following schema represents the functionality of Veeam Backup and Restore wh
### Configuring a local backup repository
-1. As Veeam cannot currently push backups directly to S3, a local backup repository is required which will be configured as **Storage Tier** with Object Storage in a later step. Click **Add Repository**:
+1. As Veeam cannot currently push backups directly to an Amazon S3-compatible system, a local backup repository is required, which will be configured as a **Storage Tier** with Object Storage in a later step. Click **Add Repository**:
2. Choose **Direct Attached Storage** from the provided options:
@@ -175,7 +175,7 @@ This section is designed to help you solve common issues encountered while perfo
#### Cause
-The application cannot access the S3 resource.
+The application cannot access the Object Storage resource.
#### Solution
@@ -200,7 +200,7 @@ Scaleway Object Storage applies a rate limit on PUT operations for safety reason
#### Solution
-You can limit the number of concurrent tasks and update the timeout duration of S3 requests on the Veeam Backup & Replication server managing the backup copy operation by adding the elements below:
+You can limit the number of concurrent tasks and update the timeout duration of Object Storage requests on the Veeam Backup & Replication server managing the backup copy operation by adding the elements below:
```
HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication
@@ -228,7 +228,7 @@ You may experience reduced throughput due to the limitation.
If you did not manage to identify the error and solve it by yourself, [open a support ticket](/console/account/how-to/open-a-support-ticket/), and provide as many details as possible, along with the necessary information below:
-- S3 Endpoint (e.g. `s3.fr-par.scw.cloud`)
+- Object Storage Endpoint (e.g. `s3.fr-par.scw.cloud`)
- Bucket name
- Object name (if the request concerns an object)
- Request type (PUT, GET, etc.)
diff --git a/tutorials/wordpress-lemp-stack-focal/index.mdx b/tutorials/wordpress-lemp-stack-focal/index.mdx
index 003a130a40..5ac58b0d38 100644
--- a/tutorials/wordpress-lemp-stack-focal/index.mdx
+++ b/tutorials/wordpress-lemp-stack-focal/index.mdx
@@ -9,7 +9,7 @@ tags: WordPress cms php LEMP nginx mysql mariadb
categories:
- instances
dates:
- validation: 2024-04-22
+ validation: 2024-10-28
posted: 2021-12-03
---