
Commit 4141913

Fix typos (dotnet#69011)
1 parent e2119d4 commit 4141913

311 files changed: +981 −985 lines


docs/coding-guidelines/EventLogging.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -16,4 +16,4 @@ Event Logging is a mechanism by which CoreClr can provide a variety of informati
 
 # Adding New Logging System
 
-Though the the Event logging system was designed for ETW, the build system provides a mechanism, basically an [adapter script- genEventing.py](../../src/coreclr/scripts/genEventing.py) so that other Logging System can be added and used by CoreClr. An Example of such an extension for [LTTng logging system](https://lttng.org/) can be found in [genLttngProvider.py](../../src/coreclr/scripts/genLttngProvider.py )
+Though the Event logging system was designed for ETW, the build system provides a mechanism, basically an [adapter script- genEventing.py](../../src/coreclr/scripts/genEventing.py) so that other Logging System can be added and used by CoreClr. An Example of such an extension for [LTTng logging system](https://lttng.org/) can be found in [genLttngProvider.py](../../src/coreclr/scripts/genLttngProvider.py )
````

docs/design/coreclr/botr/dac-notes.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -119,7 +119,7 @@ This uses the `RidMap` to lookup the `MethodDesc`. If you look at the definition
 
 This represents a target address, but it's not really a pointer; it's simply a number (although it represents an address). The problem is that `LookupMethodDef` needs to return the address of a `MethodDesc` that we can dereference. To accomplish this, the function uses a `dac_cast` to `PTR_MethodDesc` to convert the `TADDR` to a `PTR_MethodDesc`. You can think of this as the target address space form of a cast from `void *` to `MethodDesc *`. In fact, this code would be slightly cleander if `GetFromRidMap` returned a `PTR_VOID` (with pointer semantics) instead of a `TADDR` (with integer semantics). Again, the type conversion implicit in the return statement ensures that the DAC marshals the object (if necessary) and returns the host address of the `MethodDesc` in the DAC cache.
 
-The assignment statement in `GetFromRidMap` indexes an array to get a particular value. The `pMap` parameter is the address of a structure field from the `MethodDesc`. As such, the DAC will have copied the entire field into the cache when it marshaled the `MethodDesc` instance. Thus, `pMap`, which is the address of this struct, is a host pointer. Dereferencing it does not involve the DAC at all. The `pTable` field, however, is a `PTR_TADDR`. What this tells us is that `pTable` is an array of target addresses, but its type indicates that it is a marshaled type. This means that `pTable` will be a target address as well. We dereference it with the overloaded indexing operator for the `PTR` type. This will get the target address of the array and compute the target address of the element we want. The last step of indexing marshals the array element back to a host instance in the DAC cache and returns its value. We assign the the element (a `TADDR`) to the local variable result and return it.
+The assignment statement in `GetFromRidMap` indexes an array to get a particular value. The `pMap` parameter is the address of a structure field from the `MethodDesc`. As such, the DAC will have copied the entire field into the cache when it marshaled the `MethodDesc` instance. Thus, `pMap`, which is the address of this struct, is a host pointer. Dereferencing it does not involve the DAC at all. The `pTable` field, however, is a `PTR_TADDR`. What this tells us is that `pTable` is an array of target addresses, but its type indicates that it is a marshaled type. This means that `pTable` will be a target address as well. We dereference it with the overloaded indexing operator for the `PTR` type. This will get the target address of the array and compute the target address of the element we want. The last step of indexing marshals the array element back to a host instance in the DAC cache and returns its value. We assign the element (a `TADDR`) to the local variable result and return it.
 
 Finally, to get the code address, the DAC/DBI interface function will call `MethodDesc::GetNativeCode`. This function returns a value of type `PCODE`. This type is a target address, but one that we cannot dereference (it is just an alias of `TADDR`) and one that we use specifically to specify a code address. We store this value on the `ICorDebugFunction` instance and return it to the debugger.
````

docs/design/coreclr/botr/managed-type-system.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -101,7 +101,7 @@ Exception throwing within the type system is wrapped in a `ThrowHelper` class. T
 
 The type system provides a default implementation of the `ThrowHelper` class that throws exceptions deriving from a `TypeSystemException` exception base class. This default implementation is suitable for use in non-runtime scenarios.
 
-The exception messages are assigned string IDs and get consumed by the throw helper as well. We require this indirection to support the compiler scenarios: when a type loading exception occurs during an AOT compilation, the AOT compiler has two tasks - emit a warning to warn the user that this occured, and potentially generate a method body that will throw this exception at runtime when the problematic type is accessed. The localization of the compiler might not match the localization of the class library the compiler output is linking against. Indirecting the actual exception message through the string ID lets us wrap this. The consumer of the type system may reuse the throw helper in places outside the type system where this functionality is needed.
+The exception messages are assigned string IDs and get consumed by the throw helper as well. We require this indirection to support the compiler scenarios: when a type loading exception occurs during an AOT compilation, the AOT compiler has two tasks - emit a warning to warn the user that this occurred, and potentially generate a method body that will throw this exception at runtime when the problematic type is accessed. The localization of the compiler might not match the localization of the class library the compiler output is linking against. Indirecting the actual exception message through the string ID lets us wrap this. The consumer of the type system may reuse the throw helper in places outside the type system where this functionality is needed.
 
 ## Physical architecture
````

docs/design/coreclr/jit/variabletracking.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -198,7 +198,7 @@ On `BasicBlock` boundaries:
 This is handled in `LinearScan::recordVarLocationsAtStartOfBB(BasicBlock* bb)`.
 
 - If a variable doesn't have an open `VariableLiveRange` and is in `bbLiveIn`, we open one.
-This is done in `genUpdateLife` immediately after the the previous method is called.
+This is done in `genUpdateLife` immediately after the previous method is called.
 
 - If a variable has an open `VariableLiveRange` and is not in `bbLiveIn`, we close it.
 This is handled in `genUpdateLife` too.
````

docs/design/coreclr/profiling/IL Rewriting Basics.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -11,7 +11,7 @@ One of the common use cases of the `ICorProfiler*` interfaces is to perform IL r
 There are two ways to rewrite IL
 
 1. At Module load time with `ICorProfilerInfo::SetILFunctionBody`
-This approach has the benefit that it is 'set it and forget it'. You can replace the IL at module load, and the runtime will treat this new IL as if the module contained that IL - you don't have to worry about any of the quirks of ReJIT. The downside is that is is unrevertable - once it is set, you cannot change your mind.
+This approach has the benefit that it is 'set it and forget it'. You can replace the IL at module load, and the runtime will treat this new IL as if the module contained that IL - you don't have to worry about any of the quirks of ReJIT. The downside is that it is unrevertable - once it is set, you cannot change your mind.
 
 2. At any point during the process lifetime with `ICorProfilerInfo4::RequestReJIT` or `ICorProfilerInfo10::RequestReJITWithInliners`.
 This approach means that you can modify functions in response to changing conditions, and you can revert the modified code if you decide you are done with it. See the other entries about ReJIT in this folder for more information.
````

docs/design/features/event-counter.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -14,7 +14,7 @@ When EventCounter was first designed, it was tailored towards aggregating a set 
 
 ### Multi-client support ###
 
-**Emit data to all sessions at the rates requested by all clients** - This requires a little extra complexity in the runtime to maintain potentially multiple concurrent aggregations, and it is more verbose in the event stream if that is occuring. Clients need to filter out responses that don't match their requested rate, which is a little more complex than ideal, but still simpler than needing to synthesize statistics. In the case of multiple clients we can still encourage people to use a few canonical rates such as per-second, per-10 seconds, per-minute, per-hour which makes it likely that similar use cases will be able to share the exact same set of events. In the worst case that a few different aggregations are happening in parallel the overhead of our common counter aggregations shouldn't be that high, otherwise they weren't very suitable for lightweight monitoring in the first place. In terms of runtime code complexity I think the difference between supporting 1 aggregation and N aggregations is probably <50 lines per counter type and we only have a few counter types.
+**Emit data to all sessions at the rates requested by all clients** - This requires a little extra complexity in the runtime to maintain potentially multiple concurrent aggregations, and it is more verbose in the event stream if that is occurring. Clients need to filter out responses that don't match their requested rate, which is a little more complex than ideal, but still simpler than needing to synthesize statistics. In the case of multiple clients we can still encourage people to use a few canonical rates such as per-second, per-10 seconds, per-minute, per-hour which makes it likely that similar use cases will be able to share the exact same set of events. In the worst case that a few different aggregations are happening in parallel the overhead of our common counter aggregations shouldn't be that high, otherwise they weren't very suitable for lightweight monitoring in the first place. In terms of runtime code complexity I think the difference between supporting 1 aggregation and N aggregations is probably <50 lines per counter type and we only have a few counter types.
 
 Doing the filtering requires that each client can identify which EventCounter data packets are the ones it asked for and which are unrelated. Using IntervalSec as I had originally intended does not work because IntervalSec contains the exact amount of time measured in each interval rather than the nominal interval the client requested. For example a client that asks for EventCounterIntervalSec=1 could see packets that have IntervalSec=1.002038, IntervalSec=0.997838, etc. To resolve this we will add another key/pair to the payload, Series="Interval=T", where T is the number of seconds that was passed to EventCounterIntervalSec. To ensure clients with basically the same needs don't arbitrarily create different series that are identical or near identical we enforce that IntervalSec is always a whole non-negative number of seconds. Any value that can't be parsed by uint.TryParse() will be interpreted the same as IntervalSec=0. Using leading zeros on the number, ie IntervalSec=0002 may or may not work so clients are discouraged from doing so (in practice, its whatever text uint.TryParse handles).
````

docs/design/features/hosting-layer-apis.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -208,7 +208,7 @@ int hostfxr_get_runtime_properties(
 ```
 Get all runtime properties for the specified host context.
 * `host_context_handle` - initialized host context. If set to `nullptr` the function will operate on the first host context in the process.
-* `count` - in/out parameter which must not be `nullptr`. On input it specifies the size of the the `keys` and `values` buffers. On output it contains the number of entries used from `keys` and `values` buffers - the number of properties returned.
+* `count` - in/out parameter which must not be `nullptr`. On input it specifies the size of the `keys` and `values` buffers. On output it contains the number of entries used from `keys` and `values` buffers - the number of properties returned.
 * `keys` - buffer which acts as an array of pointers to buffers with keys for the runtime properties.
 * `values` - buffer which acts as an array of pointer to buffers with values for the runtime properties.
 
@@ -259,7 +259,7 @@ int corehost_load(host_interface_t *init)
 Initialize `hostpolicy`. This stores information that will be required to do all the processing necessary to start CoreCLR, but it does not actually do any of that processing.
 * `init` - structure defining how the library should be initialized
 
-If already initalized, this function returns success without reinitializing (`init` is ignored).
+If already initialized, this function returns success without reinitializing (`init` is ignored).
 
 ``` C
 int corehost_main(const int argc, const char_t* argv[])
````

docs/design/features/native-hosting.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -334,7 +334,7 @@ int hostfxr_get_runtime_properties(
 
 Returns the full set of all runtime properties for the specified host context.
 * `host_context_handle` - the initialized host context. If set to `NULL` the function will operate on runtime properties of the first host context in the process.
-* `count` - in/out parameter which must not be `NULL`. On input it specifies the size of the the `keys` and `values` buffers. On output it contains the number of entries used from `keys` and `values` buffers - the number of properties returned. If the size of the buffers is too small, the function returns a specific error code and fill the `count` with the number of available properties. If `keys` or `values` is `NULL` the function ignores the input value of `count` and just returns the number of properties.
+* `count` - in/out parameter which must not be `NULL`. On input it specifies the size of the `keys` and `values` buffers. On output it contains the number of entries used from `keys` and `values` buffers - the number of properties returned. If the size of the buffers is too small, the function returns a specific error code and fill the `count` with the number of available properties. If `keys` or `values` is `NULL` the function ignores the input value of `count` and just returns the number of properties.
 * `keys` - buffer which acts as an array of pointers to buffers with keys for the runtime properties.
 * `values` - buffer which acts as an array of pointer to buffers with values for the runtime properties.
````

docs/design/features/standalone-gc-loading.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -177,7 +177,7 @@ GC.
 ## Outstanding Questions
 
 How can we provide the most useful error message when a standalone GC fails to load? In the past it has been difficult
-to determine what preciscely has gone wrong with `coreclr_initialize` returns a HRESULT and no indication of what occured.
+to determine what preciscely has gone wrong with `coreclr_initialize` returns a HRESULT and no indication of what occurred.
 
 Same question for the DAC - Is `E_FAIL` the best we can do? If we could define our own error for DAC/GC version
 mismatches, that would be nice; however, that is technically a breaking change in the DAC.
````

docs/design/features/tiered-compilation.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -66,7 +66,7 @@ There are two mechanisms that need to be satisfied in order for a Tier0 method t
 
 1. The method needs to be called at least 30 times, as measured by the call counter, and this gives us a rough notion that the method is 'hot'. The number 30 was derived with a small amount of early empirical testing but there hasn't been a large amount of effort applied in checking if the number is optimal. We assumed that both the policy and the sample benchmarks we were measuring would be in a state of flux for a while to come so there wasn't much reason to spend a lot of time finding the exact maximum of a shifting curve. As best we can tell there is also not a steep response between changes in this value and changes in the performance of many scenarios. An order of magnitude should produce a notable difference but +-5 can vanish into the noise.
 
-2. At startup a timer is initiated with a 100ms timeout. If any Tier0 jitting occurs while the timer is running then it is reset. If the timer completes without any Tier0 jitting then, and only then, is call counting allowed to commence. This means a method could be called 1000 times in the first 100ms, but the timer will still need to expire and have the method called 30 more times before it is eligible for Tier1. The reason for the timer is to measure whether or not Tier0 jitting is still occuring, which is a heuristic to measure whether or not the application is still in its startup phase. Before adding the timer we observed that both the call counter and background threads compiling Tier1 code versions were slowing down the foreground threads trying to complete startup, and this could result in losing all the startup performance wins from Tier0 jitting. By delaying until after 'startup' the Tier0 code is left running longer, but that was nearly always a better performing outcome than trying to replace it with Tier1 code too eagerly.
+2. At startup a timer is initiated with a 100ms timeout. If any Tier0 jitting occurs while the timer is running then it is reset. If the timer completes without any Tier0 jitting then, and only then, is call counting allowed to commence. This means a method could be called 1000 times in the first 100ms, but the timer will still need to expire and have the method called 30 more times before it is eligible for Tier1. The reason for the timer is to measure whether or not Tier0 jitting is still occurring, which is a heuristic to measure whether or not the application is still in its startup phase. Before adding the timer we observed that both the call counter and background threads compiling Tier1 code versions were slowing down the foreground threads trying to complete startup, and this could result in losing all the startup performance wins from Tier0 jitting. By delaying until after 'startup' the Tier0 code is left running longer, but that was nearly always a better performing outcome than trying to replace it with Tier1 code too eagerly.
 
 After these two conditions are satisfied the method is placed in a queue for Tier1 compilation, compiled on a background thread, and then the Tier1 version is made active.
````
