Fix typos in the docs (shader-slang#4322)
aleino-nv committed Jun 10, 2024
1 parent 9a23a9a commit 0974463
Showing 7 changed files with 32 additions and 32 deletions.
16 changes: 8 additions & 8 deletions docs/design/capabilities.md
@@ -77,7 +77,7 @@ capability opengl : khronos;

Here we are saying that `sm_5_1` supports everything `sm_5_0` supports, and potentially more. We are saying that `d3d12` supports `sm_6_0` but maybe not, e.g., `sm_6_3`.
We are expressing the fact that having a `glsl_*` capability means you are on some Khronos API target, but that it doesn't specify which one.
(The exact details of these declarations obviously aren't the point; getting a good hierarchy of capabilities will take time.)

Capability Composition
----------------------
@@ -119,7 +119,7 @@ void myFunc();
```

This function should be equivalent to one with just a single `[availableFor((vulkan & fragment) | (d3d12 & fragment))]` which is equivalent to `[availableFor((vulkan | d3d12) & fragment)]`.
Simplification should generally push toward "disjunctive normal form," though, rather than pursue simplifications like that.
Note that we do *not* include negation, so that capabilities are not general Boolean expressions.
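
To make "disjunctive normal form" concrete, here is a minimal C++ sketch (purely illustrative; `Capability`, `CapabilityConjunction`, and `CapabilitySet` are made-up names, not the compiler's actual representation) of a DNF capability expression as a list of conjunctions, where `&` distributes over `|`:

```
#include <set>
#include <string>
#include <vector>

// Illustrative sketch only: an atomic capability is just a name here.
using Capability = std::string;

// A conjunction is a set of atomic capabilities that must all be present.
using CapabilityConjunction = std::set<Capability>;

// Disjunctive normal form: the whole expression holds if *any* one of the
// conjunctions holds.
using CapabilitySet = std::vector<CapabilityConjunction>;

// `a | b` in DNF is just the concatenation of the two lists of conjunctions.
CapabilitySet capabilityOr(const CapabilitySet& a, const CapabilitySet& b)
{
    CapabilitySet result = a;
    result.insert(result.end(), b.begin(), b.end());
    return result;
}

// `a & b` distributes over the disjunctions: every conjunction from `a`
// gets merged with every conjunction from `b`.
CapabilitySet capabilityAnd(const CapabilitySet& a, const CapabilitySet& b)
{
    CapabilitySet result;
    for (const auto& ca : a)
    {
        for (const auto& cb : b)
        {
            CapabilityConjunction merged = ca;
            merged.insert(cb.begin(), cb.end());
            result.push_back(merged);
        }
    }
    return result;
}
```

With this representation, `(vulkan & fragment) | (d3d12 & fragment)` and `(vulkan | d3d12) & fragment` normalize to the same two conjunctions, which is exactly the equivalence noted above.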

Validation
@@ -130,7 +130,7 @@ For a given function definition `F`, the front end will scan its body and see wh
If `F` doesn't have an `[availableFor(...)]` attribute, then we can derive its *effective* `[availableFor(...)]` capability as `R` (this probably needs to be expressed as an iterative dataflow problem over the call graph, to handle cycles).

If `F` *does* have one or more `[availableFor(...)]` clauses that amount to a declared capability `C` (again in disjunctive normal form), then we can check that `C` implies `R` and error out if it is not the case.
A reasonable implementation would track which calls introduced which requirements, and be able to explain *why* `C` does not capture the stated requirements.

For a shader entry point, we should check it as if it had an `[availableFor(...)]` that is the OR of all the specified target profiles (e.g., `sm_5_0 | glsl_450 | ...`) ANDed with the specified stage (e.g., `fragment`).
Any error here should be reported to the user.
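
One way to picture the iterative dataflow computation mentioned above is the following sketch. It is hypothetical (it reuses the illustrative `CapabilitySet` helpers from the earlier sketch, and `Function` and `topCapabilitySet` are made-up names, not real compiler types): undeclared functions start out as "available everywhere" and are narrowed by AND-ing in their callees' requirements until a fixed point is reached, which is what handles cycles in the call graph.

```
#include <vector>

// "Top" = no requirement at all: a single empty conjunction, so AND-ing
// with it leaves the other operand unchanged.
CapabilitySet topCapabilitySet() { return CapabilitySet{ CapabilityConjunction{} }; }

struct Function
{
    std::vector<Function*> callees;     // functions called in the body
    bool hasDeclaredCapability = false; // does it carry [availableFor(...)]?
    CapabilitySet declared;             // the declared capability C (in DNF)
    CapabilitySet effective;            // the computed requirement R (in DNF)
};

void computeEffectiveCapabilities(std::vector<Function*>& functions)
{
    for (auto* f : functions)
        f->effective = f->hasDeclaredCapability ? f->declared : topCapabilitySet();

    bool changed = true;
    while (changed)
    {
        changed = false;
        for (auto* f : functions)
        {
            if (f->hasDeclaredCapability)
                continue; // declared functions keep their stated capability

            // R is the AND of the requirements of everything the body calls.
            CapabilitySet r = topCapabilitySet();
            for (auto* callee : f->callees)
                r = capabilityAnd(r, callee->effective);

            if (r != f->effective)
            {
                f->effective = r;
                changed = true;
            }
        }
    }
}
```

For functions that do declare a capability `C`, a separate pass would compute `R` from the body in the same way and report an error, along with the offending call sites, when `C` does not imply `R`.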
@@ -151,7 +151,7 @@ It should be possible to define multiple versions of a function, having differen
[availableFor(d3d12)] void myFunc() { ... }
```

For front-end checking, these should be treated as if they were a single definition of `myFunc` with an ORed capability (e.g., `vulkan | d3d12`).
Overload resolution will pick the "best" candidate at a call site based *only* on the signatures of the function (note that this differs greatly from how profile-specific function overloading works in Cg).

The front-end will then generate initial IR code for each definition of `myFunc`.
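
A speculative sketch of how a later step might choose between those per-capability definitions when generating code for one particular target (again reusing the illustrative DNF types from the sketches above; none of these names are real Slang APIs):

```
#include <vector>

// A target is modeled here as the set of atomic capabilities it provides.
// A DNF requirement is satisfied if at least one of its conjunctions is
// fully covered by the target.
bool targetSatisfies(const CapabilityConjunction& target, const CapabilitySet& required)
{
    for (const auto& conjunction : required)
    {
        bool allPresent = true;
        for (const auto& atom : conjunction)
        {
            if (target.find(atom) == target.end()) { allPresent = false; break; }
        }
        if (allPresent)
            return true;
    }
    return false;
}

struct IRFuncDefinition
{
    CapabilitySet availableFor; // from the [availableFor(...)] attribute
    // ... IR body ...
};

IRFuncDefinition* pickDefinitionForTarget(
    const CapabilityConjunction&          target,
    const std::vector<IRFuncDefinition*>& definitions)
{
    for (auto* def : definitions)
    {
        if (targetSatisfies(target, def->availableFor))
            return def; // a real implementation would rank "best," not "first"
    }
    return nullptr; // no definition is usable on this target: report an error
}
```
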
@@ -177,7 +177,7 @@ So far I've talked about capabilities on functions, but they should also be allo

We should also provide a way to specify that a `register` or other layout modifier is only applicable for specific targets/stages. Such a capability nominally exists in HLSL today, but it would be much more useful if it could be applied to specify target-API-specific bindings.

Only functions should support overloading based on capability. In all other cases there can only be one definition of an entity, and capabilities just decide when it is available.

API Extensions as Capabilities
------------------------------
@@ -192,14 +192,14 @@ capability KHR_secret_sauce : vulkan;
void improveShadows();
```

When generating code for Vulkan, we should be able to tell the user that the `improveShadows()` function requires the given extension. The user should be able to express compositions of capabilities in their `-profile` option (and similarly for the API):

```
slangc code.slang -profile vulkan+KHR_secret_sauce
```
(Note that for the command line, it is beneficial to use `+` instead of `&` to avoid conflicts with shell interpreters)
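
A minimal sketch of how such a `-profile` argument could be split into its constituent capability names (purely illustrative; the real option handling is more involved, and `parseProfileOption` is a made-up name):

```
#include <string>
#include <vector>

// Split "vulkan+KHR_secret_sauce" on '+' into atomic capability names,
// which together form a single conjunction.
std::vector<std::string> parseProfileOption(const std::string& profile)
{
    std::vector<std::string> capabilities;
    size_t start = 0;
    while (start <= profile.size())
    {
        size_t end = profile.find('+', start);
        if (end == std::string::npos)
            end = profile.size();
        if (end > start)
            capabilities.push_back(profile.substr(start, end - start));
        start = end + 1;
    }
    return capabilities;
}
```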

An important question is whether the compiler should automatically infer required extensions without them being specified, so that it produces SPIR-V that requires extensions the user didn't ask for.
The argument against such inference is that users should opt in to non-standard capabilities they are using, but it would be unfortunate if this in turn requires verbose command lines when invoking the compiler.
It should be possible to indicate the capabilities that a module or entry point should be compiled to use without command-line complications.

@@ -268,4 +268,4 @@ Conclusion
----------

Overall, the hope is that in many cases developers will be able to use capability-based partitioning and overloading of APIs to build code that only has to pass through the Slang front-end once, but that can then go through back-end code generation for each target.
In cases where this can't be achieved, the way that capability-based overloading is built into the Slang IR design means that we should be able to merge multiple target-specific definitions into one IR module, so that a library can employ target-specific specializations while still presenting a single API to consumers.
2 changes: 1 addition & 1 deletion docs/design/casting.md
@@ -25,7 +25,7 @@ These functions will also work with types that do not have Vtbl - like IRInst de

Both 'as' and 'dynamicCast' handle the case where the pointer is a nullptr, by returning a nullptr. If the cast succeeds the cast pointer is returned; otherwise nullptr is returned. If a cast is performed with a free function it always returns a raw pointer.

So why have 'as' and 'dynamicCast' - they seem sort of similar? The primary difference is dynamicCast *must* always return a pointer to the same object, whilst 'as' *can* return a pointer to a different object if that is the desired 'normal' casting behavior for the type. This is the case for Type*: when using 'as' it may return a different object - the 'canonical type' for the Type*. For a concrete example take 'NamedExpressionType': its canonical type is the type the name relates to. If you use 'as' on it - it will produce a pointer to a different object, an object that will not be castable back into a NamedExpressionType.

Also keep in mind that 'as' behavior is based on the pointer type being cast from. For any pointer to a type derived from Type it will cast the canonical type. **BUT** if the pointer is pointing to a Type derived *object*, but the pointer type is *not* derived from Type (like say RefObject*), then 'as' will behave like dynamicCast.
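
To make the difference concrete, here is a small self-contained toy (these are deliberately *not* the real Slang classes or the real 'as'/'dynamicCast' implementations, and it leans on built-in `dynamic_cast` purely for brevity, which the real codebase avoids). It only demonstrates the behavior described above: 'dynamicCast' may only return the same object, while 'as' on a Type follows the canonical type and so can return a different object that is no longer castable back:

```
#include <cstdio>

struct Type
{
    virtual ~Type() = default;
    // The "normal" casting behavior for types goes through the canonical type.
    virtual Type* getCanonicalType() { return this; }
};

struct BasicType : Type {};

// Stand-in for NamedExpressionType: a name whose canonical type is another type.
struct NamedExpressionType : Type
{
    Type* underlying;
    explicit NamedExpressionType(Type* t) : underlying(t) {}
    Type* getCanonicalType() override { return underlying; }
};

// Toy 'dynamicCast': must return the very same object, or nullptr.
template<typename T>
T* toyDynamicCast(Type* t) { return dynamic_cast<T*>(t); }

// Toy 'as' for Type pointers: resolve to the canonical type first, then cast.
template<typename T>
T* toyAs(Type* t) { return t ? dynamic_cast<T*>(t->getCanonicalType()) : nullptr; }

int main()
{
    BasicType canonical;
    NamedExpressionType named(&canonical);
    Type* type = &named;

    // Same object back: still the NamedExpressionType.
    printf("dynamicCast: %p\n", (void*)toyDynamicCast<NamedExpressionType>(type));

    // A *different* object: the canonical type...
    printf("as<Type>:    %p\n", (void*)toyAs<Type>(type));

    // ...which can no longer be cast back to NamedExpressionType (nullptr).
    printf("round trip:  %p\n", (void*)toyAs<NamedExpressionType>(toyAs<Type>(type)));
}
```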

2 changes: 1 addition & 1 deletion docs/design/coding-conventions.md
@@ -31,7 +31,7 @@ As a general rule, be skeptical of "modern C++" ideas unless they are clearly be
We are not quite in the realm of "Orthodox C++", but some of the same guidelines apply:

* Don't use exceptions for non-fatal errors (and even then support a build flag to opt out of exceptions)
* Don't use the built-in C++ RTTI system (home-grown is okay)
* Don't use the C++ variants of C headers (e.g., `<cstdio>` instead of `<stdio.h>`)
* Don't use the STL containers
* Don't use iostreams
4 changes: 2 additions & 2 deletions docs/design/decl-refs.md
@@ -34,7 +34,7 @@ Cell a = { 3 };
int b = a.value + 4;
```

In this case, the expression node for `a.value` can directly reference the declaration of the field `Cell::value`, and from that we can conclude that the type of the field (and hence the expression) is `int`.

In contrast, things get more complicated as soon as we have a language with generics:

Expand Down Expand Up @@ -158,7 +158,7 @@ There are many queries like "what is the return type of this function?" that typ
The `syntax.h` file defines helpers for most of the existing declaration AST nodes for querying properties that should represent substitutions (the type of a variable, the return type of a function, etc.).
If you are writing code that is working with a `DeclRef`, try to use these accessors and avoid being tempted to extract the bare declaration and start querying it.

Some things like `Modifier`s aren't (currently) affected by substitutions, so it can make sense to query them on a bare declaration instead of a `DeclRef`.
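
A self-contained toy (not the real `DeclRef` machinery; the names are made up) showing why queries should go through the declaration *reference*: the reference carries the substitutions needed to answer a question like "what is the type of this field?" for a particular specialization, while the bare declaration can only report the unsubstituted answer.

```
#include <cstdio>
#include <map>
#include <string>

struct FieldDecl
{
    std::string name;
    std::string declaredType; // e.g. "T" inside a generic container
};

struct ToyDeclRef
{
    const FieldDecl*                   decl;
    std::map<std::string, std::string> substitutions; // e.g. T -> int

    // The accessor applies the substitutions; the bare decl cannot.
    std::string getType() const
    {
        auto it = substitutions.find(decl->declaredType);
        return it == substitutions.end() ? decl->declaredType : it->second;
    }
};

int main()
{
    FieldDecl value{"value", "T"};
    ToyDeclRef ref{&value, {{"T", "int"}}}; // reference to Cell<int>::value

    printf("bare decl says: %s\n", value.declaredType.c_str()); // "T"
    printf("decl-ref says:  %s\n", ref.getType().c_str());      // "int"
}
```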

Conclusion
----------
18 changes: 9 additions & 9 deletions docs/design/existential-types.md
@@ -83,7 +83,7 @@ A C++ class or COM component can implement an existential type, with the constra
Many modern languages (e.g., Go) support adapting existing types to new interfaces, so that a "pointer" of interface type is actually a fat pointer: one for the object, and one for the interface dispatch table.
Our examples so far have assumed that the type `T` needs to be passed around separately from the witness table `W`, but that isn't strictly required in some implementations.
In type theory, the most important operation you can do with an existential type is to "open" it, which means to have a limited scope in which you can refer to the constituent pieces of a "bundled up" value of a type like `IImage`.
We could imagine "opening" an existential as something like:
```
@@ -97,7 +97,7 @@ void myFunc(IImage img)
        // and `obj` is a value of type `T`.
        //
        doSomethingCool<T>(obj);
    }
}
```
@@ -125,18 +125,18 @@ Knowing the implementation strategy outline above, we can re-phrase this questio
For simple interfaces this is sometimes possible, but in the general case there are other desirable language features that get in the way:
* When an interface has associated types, there is no type that can be chosen as the associated type for the interface's existential type. The "obvious" approach of using the constraints on the associated type can lead to unsound logic when interface methods take associated types as parameters.
* When an interface uses the "this type" (e.g., an `IComparable` interface with a `compareTo(ThisType other)` method), it isn't correct to simplify the this type to the interface type (just because you have two `IComparable` values doesn't mean you can compare them - they have to be of the same concrete type!); a small sketch of this pitfall follows the list.
* If we allow for `static` methods on interfaces, then what implementation would we use for these methods on the interface's existential type?
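
As a small self-contained illustration of the "this type" bullet above (ordinary C++, not Slang): if `compareTo` is erased to take the interface type, nothing stops a caller from comparing two unrelated concrete types, and every implementation is forced into a runtime type check.

```
#include <string>

struct IComparable
{
    virtual ~IComparable() = default;
    // Erased signature: takes the interface type instead of the "this type".
    virtual int compareTo(const IComparable& other) const = 0;
};

struct MyInt : IComparable
{
    int value = 0;
    int compareTo(const IComparable& other) const override
    {
        // Forced to downcast and *hope* the dynamic types match.
        auto* o = dynamic_cast<const MyInt*>(&other);
        return o ? (value - o->value) : 0; // type mismatch only shows up at runtime
    }
};

struct MyString : IComparable
{
    std::string value;
    int compareTo(const IComparable& other) const override { return 0; /* ... */ }
};

int main()
{
    MyInt a; MyString b;
    // Compiles fine, but is meaningless: the two values have different
    // concrete types, which a real "this type" constraint would reject.
    return a.compareTo(b);
}
```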
Encoding Existentials in the IR
-------------------------------
Existentials are encoded in the Slang IR quite simply. We have an operation `makeExistential(T, obj, W)` that takes a type `T`, a value `obj` that must have type `T`, and a witness table `W` that shows how `T` conforms to some interface `I`. The result of the `makeExistential` operation is then a value of the type `I`.
Rather than include an IR operation to "open" an existential, we can instead just provide accessors for the pieces of information in an existential: one to extract the type field, one to extract the value, and one to extract the witness table. These would idiomatically be used like:
```
let e : ISomeInterface = /* some existential */
@@ -172,7 +172,7 @@ We require further transformation passes to allow specialization in more general
* Function specialization is needed so that a function with existential parameters is specialized based on the actual types used at call sites
Transformations just like these are already required when working with resource types (textures/samplers) on targets that don't support first-class computation on resources, so it is possible to share some of the same logic.
Similarly, any effort we put into validation (to ensure that code is written in a way that *can* be simplified) can hopefully be shared between existentials and resources.
Compositions
------------
@@ -186,7 +186,7 @@ The hardest part of supporting composition of interfaces is actually in how to l
Why are we passing along the type?
----------------------------------
I'm glossing over something pretty significant here, which is why anybody would pass around the type as part of the existential value, when none of our examples so far have made use of it.
This sort of thing isn't very important for languages where interface polymorphism is limited to heap-allocated "reference" types (or values that have been "boxed" into reference types), because the dynamic type of an object can almost always be read out of the object itself.
When dealing with a value type, though, we have to deal with things like making *copies*:
@@ -235,7 +235,7 @@ This is the reason for passing through the type `T` as part of an existential va
If we only wanted to deal with reference types, this would all be greatly simplified, because the `sizeInBytes` and the copy/move semantics would be fixed: everything is a single pointer.
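
As a rough picture of what this implies for the layout of a value-type existential (purely illustrative, not Slang's actual representation, and assuming a trivially copyable payload for brevity): the "type" part boils down to whatever generic code needs in order to copy the value, which is exactly the information that a plain interface pointer would not carry.

```
#include <cstddef>
#include <cstdlib>
#include <cstring>

struct WitnessTable { /* function pointers for the interface's methods */ };

struct ExistentialValue
{
    size_t              sizeInBytes; // the "T" part: how big the payload is
    void*               payload;     // the "obj" part: the concrete value
    const WitnessTable* witness;     // the "W" part: how T implements the interface
};

// Generic code can copy the value without knowing its concrete type *because*
// the size travels with it; with only reference types this would collapse to
// copying a single pointer. (The caller owns the newly allocated payload.)
ExistentialValue copyExistential(const ExistentialValue& src)
{
    ExistentialValue dst = src;
    dst.payload = std::malloc(src.sizeInBytes);
    std::memcpy(dst.payload, src.payload, src.sizeInBytes);
    return dst;
}
```
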
All of the same issues arise if we're making copies of existential values:
```
IWritable copyAndClobberExistential(IWritable obj)
2 changes: 1 addition & 1 deletion docs/design/interfaces.md
@@ -226,7 +226,7 @@ Both the use of `&` and `where` are advanced features that we might cut due to im

### Value Parameters

Because HLSL has generics like `vector<float,3>` that already take non-type parameters, the language will need *some* degree of support for generic parameters that aren't types (at least integers need to be supported).
We need syntax for this that doesn't bloat the common case.

In this case, I think that what I've used in the current Slang implementation is reasonable, where a value parameter needs a `let` prefix: