Commit

Revise 'OK' -> 'Okay' for softness
suny-am committed Jun 18, 2024
1 parent 23246dd commit bf054ae
Showing 6 changed files with 8 additions and 8 deletions.
4 changes: 2 additions & 2 deletions advanced-techniques/benchmarking/time.md
@@ -58,7 +58,7 @@ submit_to_do_on_gpu(write_current_time, end_timestamp_query)

We must then **fetch** the timestamp values back to the CPU, through a mapped buffer like we see in [Playing with buffers](../../basic-3d-rendering/input-geometry/playing-with-buffers.md#mapping-context).

- > 🫡 OK, got it, so what about actual C++ code?
+ > 🫡 Okay, got it, so what about actual C++ code?
Whether they measure timestamps or other things, GPU queries are stored in a `QuerySet`. We typically store both the start and end time in the same set:

@@ -293,7 +293,7 @@ Reading timestamps

### Resolving timestamps

- Okey, the render pass writes to our first query when it begins, and writes to the second query when it ends. We only need to compute the difference now, right? But the timestamps still **live in the GPU memory**, so we first need to **fetch them back** to the CPU.
+ Okay, the render pass writes to our first query when it begins, and writes to the second query when it ends. We only need to compute the difference now, right? But the timestamps still **live in the GPU memory**, so we first need to **fetch them back** to the CPU.

The first step consists in **resolving** the query. This gets the timestamp values from whatever internal representation the WebGPU implementation uses to store the query set and writes them into a **GPU buffer**.

2 changes: 1 addition & 1 deletion appendices/custom-extensions/with-dawn.md
@@ -153,7 +153,7 @@ I leave the feature state to `Stable` for the sake of simplicity. If you want to

### Backend change (Vulkan)

- OK now our feature is correctly wired up in the internal API, but so far **none of the backends support it**! At this stage we must focus on **a single one at a time**.
+ Okay, now our feature is correctly wired up in the internal API, but so far **none of the backends support it**! At this stage we must focus on **a single one at a time**.

We start with **Vulkan**, looking inside `dawn/src/dawn/native/vulkan`. So let's first force the Vulkan backend in our application:

2 changes: 1 addition & 1 deletion basic-3d-rendering/hello-triangle.md
@@ -353,7 +353,7 @@ pipelineDesc.multisample.mask = ~0u;
pipelineDesc.multisample.alphaToCoverageEnabled = false;
```

- Okey, we finally **configured all the stages** of the render pipeline. All that remains now is to specify the behavior of the two **programmable stages**, namely give a **vertex** and a **fragment shader**.
+ Okay, we finally **configured all the stages** of the render pipeline. All that remains now is to specify the behavior of the two **programmable stages**, namely give a **vertex** and a **fragment shader**.

Shaders
-------
2 changes: 1 addition & 1 deletion basic-3d-rendering/shader-uniforms/a-first-uniform.md
@@ -354,7 +354,7 @@ The fields `binding.sampler` and `binding.textureView` are only needed when the

### Usage

- OK we are now ready to connect the dots! It is as simple as setting the bind group to use before the draw call:
+ Okay, we are now ready to connect the dots! It is as simple as setting the bind group to use before the draw call:

````{tab} With webgpu.hpp
```C++
2 changes: 1 addition & 1 deletion basic-3d-rendering/some-interaction/lighting-control.md
@@ -388,7 +388,7 @@ In this chapter we have:
- Connected lighting with GUI.
- Created a custom GUI.
- OK, we are now ready to dive into material models for real!
+ Okay, we are now ready to dive into material models for real!
````{tab} With webgpu.hpp
*Resulting code:* [`step100`](https://github.com/eliemichel/LearnWebGPU-Code/tree/step100)
4 changes: 2 additions & 2 deletions basic-compute/compute-pipeline.md
@@ -332,7 +332,7 @@ The workgroup sizes must be constant expressions.

### Workgroup size vs count

- > 😟 OK, that makes a lot of variables just to set a number of jobs that is just the product of them in the end, doesn't it?
+ > 😟 Okay, that makes a lot of variables just to set a number of jobs that is just the product of them in the end, doesn't it?
The thing is: **all combinations are not equivalent**, even if they multiply to the same number of threads.

@@ -356,7 +356,7 @@ These rules are somehow contradictory. Only a benchmark on your specific use cas

### Workgroup dimensions

- > 😟 Ok I see better now, but what about the different axes $w$, $h$ and $d$? Is a workgroup size of $2 \times 2 \times 4$ different from $16 \times 1 \times 1$?
+ > 😟 Okay, I see better now, but what about the different axes $w$, $h$ and $d$? Is a workgroup size of $2 \times 2 \times 4$ different from $16 \times 1 \times 1$?
It is different indeed, because this size **gives hints to the hardware** about the potential **consistency of memory access** across threads.

