Fix some typos/grammar (#24082)
Philippus authored and johanandren committed Dec 4, 2017
1 parent cbc1c9a commit 5ef03c4
Showing 9 changed files with 13 additions and 13 deletions.
2 changes: 1 addition & 1 deletion akka-docs/src/main/paradox/additional/faq.md
Original file line number Diff line number Diff line change
@@ -12,7 +12,7 @@
Akka is also the name of a goddess in the Sámi (the native Swedish population)
mythology. She is the goddess that stands for all the beauty and good in the
world. The mountain can be seen as the symbol of this goddess.

-Also, the name AKKA is the a palindrome of letters A and K as in Actor Kernel.
+Also, the name AKKA is a palindrome of the letters A and K as in Actor Kernel.

Akka is also:

2 changes: 1 addition & 1 deletion akka-docs/src/main/paradox/additional/osgi.md
@@ -34,7 +34,7 @@
FSM of a bundle in an OSGi container:
1. INSTALLED: A bundle that is installed has been loaded from disk and a classloader instantiated with its capabilities.
Bundles are iteratively installed manually or through container-specific descriptors. For those familiar with legacy packaging
such as EJB, the modular nature of OSGi means that bundles may be used by multiple applications with overlapping dependencies.
-By resolving them individually from repositories, these overlaps can be de-duplicated across multiple deployemnts to
+By resolving them individually from repositories, these overlaps can be de-duplicated across multiple deployments to
the same container.
2. RESOLVED: A bundle that has been resolved is one that has had its requirements (imports) satisfied. Resolution does
mean that a bundle can be started.
2 changes: 1 addition & 1 deletion akka-docs/src/main/paradox/cluster-dc.md
@@ -169,7 +169,7 @@
and regions, isolated from other data centers. If you start an entity type with
nodes and you have defined 3 different data centers and then send messages to the same entity id to
sharding regions in all data centers you will end up with 3 active entity instances for that entity id,
one in each data center. This is because the region/coordinator is only aware of its own data center
-and will activate the entity there. It's unaware of the existence of corresponding entitiy in the
+and will activate the entity there. It's unaware of the existence of corresponding entities in the
other data centers.

Especially when used together with Akka Persistence that is based on the single-writer principle
4 changes: 2 additions & 2 deletions akka-docs/src/main/paradox/cluster-sharding.md
@@ -350,7 +350,7 @@
Note that stopped entities will be started again when a new message is targeted
## Graceful Shutdown

You can send the @scala[`ShardRegion.GracefulShutdown`] @java[`ShardRegion.gracefulShutdownInstance`] message
-to the `ShardRegion` actor to handoff all shards that are hosted by that `ShardRegion` and then the
+to the `ShardRegion` actor to hand off all shards that are hosted by that `ShardRegion` and then the
`ShardRegion` actor will be stopped. You can `watch` the `ShardRegion` actor to know when it is completed.
During this period other regions will buffer messages for those shards in the same way as when a rebalance is
triggered by the coordinator. When the shards have been stopped the coordinator will allocate these shards elsewhere.
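The buffering behaviour described above can be sketched in plain Java (hypothetical names, not the sharding internals): while a hand-off is in progress the region queues messages, and flushes them once the coordinator has reallocated the shard.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.Queue;

public class RegionStub {
    private final Queue<String> buffer = new ArrayDeque<>();
    private boolean handingOff = false;

    public void startHandOff() { handingOff = true; }
    public void finishHandOff() { handingOff = false; }

    /** While a hand-off runs, messages are buffered instead of delivered. */
    public Optional<String> deliver(String msg) {
        if (handingOff) {
            buffer.add(msg);
            return Optional.empty();
        }
        return Optional.of(msg);
    }

    /** After reallocation, the buffered messages go to the shard's new home. */
    public List<String> flush() {
        List<String> out = new ArrayList<>(buffer);
        buffer.clear();
        return out;
    }
}
```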
@@ -464,7 +464,7 @@
the identifiers of the shards running in a Region and what entities are alive for
a `ShardRegion.ClusterShardingStats` containing the identifiers of the shards running in each region and a count
of entities that are alive in each shard.

-The type names of all started shards can be aquired via @scala[`ClusterSharding.shardTypeNames`] @java[`ClusterSharding.getShardTypeNames`].
+The type names of all started shards can be acquired via @scala[`ClusterSharding.shardTypeNames`] @java[`ClusterSharding.getShardTypeNames`].

The purpose of these messages is testing and monitoring, they are not provided to give access to
directly sending messages to the individual entities.
4 changes: 2 additions & 2 deletions akka-docs/src/main/paradox/common/circuitbreaker.md
@@ -82,7 +82,7 @@
Here's how a `CircuitBreaker` would be configured for:

### Future & Synchronous based API

-Once a circuit breaker actor has been intialized, interacting with that actor is done by either using the Future based API or the synchronous API. Both of these APIs are considered `Call Protection` because whether synchronously or asynchronously, the purpose of the circuit breaker is to protect your system from cascading failures while making a call to another service. In the future based API, we use the `withCircuitBreaker` which takes an asynchronous method (some method wrapped in a `Future`), for instance a call to retrieve data from a database, and we pipe the result back to the sender. If for some reason the database in this example isn't responding, or there is another issue, the circuit breaker will open and stop trying to hit the database again and again until the timeout is over.
+Once a circuit breaker actor has been initialized, interacting with that actor is done by either using the Future based API or the synchronous API. Both of these APIs are considered `Call Protection` because whether synchronously or asynchronously, the purpose of the circuit breaker is to protect your system from cascading failures while making a call to another service. In the future based API, we use the `withCircuitBreaker` which takes an asynchronous method (some method wrapped in a `Future`), for instance a call to retrieve data from a database, and we pipe the result back to the sender. If for some reason the database in this example isn't responding, or there is another issue, the circuit breaker will open and stop trying to hit the database again and again until the timeout is over.

The Synchronous API would also wrap your call with the circuit breaker logic, however, it uses the `withSyncCircuitBreaker` and receives a method that is not wrapped in a `Future`.
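The state machine behind both APIs can be sketched as a few lines of plain Java (hypothetical names; Akka's real `CircuitBreaker` adds call timeouts, scheduled resets and state listeners):

```java
public class BreakerSketch {
    public enum State { CLOSED, OPEN, HALF_OPEN }

    private final int maxFailures;
    private int failures = 0;
    private State state = State.CLOSED;

    public BreakerSketch(int maxFailures) { this.maxFailures = maxFailures; }

    public State state() { return state; }

    /** Runs the protected call, failing fast while the breaker is open. */
    public <T> T call(java.util.concurrent.Callable<T> body, T fallback) {
        if (state == State.OPEN) return fallback; // fail fast: call not attempted
        try {
            T result = body.call();
            failures = 0;
            state = State.CLOSED; // a success closes the breaker again
            return result;
        } catch (Exception e) {
            failures++;
            if (failures >= maxFailures) state = State.OPEN; // trip the breaker
            return fallback;
        }
    }

    /** Akka moves OPEN to HALF_OPEN after resetTimeout; modeled as a plain call here. */
    public void attemptReset() {
        if (state == State.OPEN) state = State.HALF_OPEN;
    }
}
```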

@@ -154,4 +154,4 @@
The below examples doesn't make a remote call when the state is *HalfOpen*. Using

#### Java

-@@snip [TellPatternJavaActor.java]($code$/java/jdocs/circuitbreaker/TellPatternJavaActor.java) { #circuit-breaker-tell-pattern }
+@@snip [TellPatternJavaActor.java]($code$/java/jdocs/circuitbreaker/TellPatternJavaActor.java) { #circuit-breaker-tell-pattern }
2 changes: 1 addition & 1 deletion akka-docs/src/main/paradox/distributed-data.md
@@ -83,7 +83,7 @@
at least **N/2 + 1** replicas, where N is the number of nodes in the cluster
When you specify to write to `n` out of `x` nodes, the update will first replicate to `n` nodes.
If there are not enough Acks after 1/5th of the timeout, the update will be replicated to `n` other
nodes. If there are less than n nodes left all of the remaining nodes are used. Reachable nodes
-are prefered over unreachable nodes.
+are preferred over unreachable nodes.

Note that `WriteMajority` has a `minCap` parameter that is useful to specify to achieve better safety for small clusters.
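The majority rule above can be written out as a small helper (a sketch, assuming the required count is simply `N/2 + 1` raised by `minCap` for small clusters, capped at the cluster size):

```java
public class Majority {
    /** Number of replicas a majority write must reach. */
    public static int withMinCap(int clusterSize, int minCap) {
        int majority = clusterSize / 2 + 1;           // the N/2 + 1 rule
        int required = Math.max(majority, minCap);    // minCap protects small clusters
        return Math.min(clusterSize, required);       // cannot exceed the cluster size
    }
}
```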

4 changes: 2 additions & 2 deletions akka-docs/src/main/paradox/persistence.md
@@ -842,8 +842,8 @@
Event Adapters help in situations where:
* **Version Migrations** – existing events stored in *Version 1* should be "upcasted" to a new *Version 2* representation,
and the process of doing so involves actual code, not just changes on the serialization layer. For these scenarios
the `toJournal` function is usually an identity function, however the `fromJournal` is implemented as
-`v1.Event=>v2.Event`, performing the neccessary mapping inside the fromJournal method.
-This technique is sometimes refered to as "upcasting" in other CQRS libraries.
+`v1.Event=>v2.Event`, performing the necessary mapping inside the fromJournal method.
+This technique is sometimes referred to as "upcasting" in other CQRS libraries.
* **Separating Domain and Data models** – thanks to EventAdapters it is possible to completely separate the domain model
from the model used to persist data in the Journals. For example one may want to use case classes in the
domain model, however persist their protocol-buffer (or any other binary serialization format) counter-parts to the Journal.
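The upcasting shape described above can be sketched with hypothetical event types (plain Java, not the full akka-persistence `EventAdapter` interface): `toJournal` stays the identity, while `fromJournal` maps a stored V1 event to its V2 representation.

```java
public class UpcastSketch {
    public record EventV1(String payload) {}
    public record EventV2(String payload, int schemaVersion) {}

    /** Identity on the write side: new events are stored as-is. */
    public static EventV2 toJournal(EventV2 event) { return event; }

    /** Read side: old V1 events are upcast, current V2 events pass through. */
    public static EventV2 fromJournal(Object stored) {
        if (stored instanceof EventV1 v1) {
            return new EventV2(v1.payload(), 2); // the v1 => v2 mapping
        }
        return (EventV2) stored;
    }
}
```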
4 changes: 2 additions & 2 deletions akka-docs/src/main/paradox/scheduler.md
@@ -22,7 +22,7 @@
a known datastructure and algorithm for handling such use cases, refer to the [H
whitepaper by Varghese and Lauck if you'd like to understand its inner workings.

The Akka scheduler is **not** designed for long-term scheduling (see [akka-quartz-scheduler](https://github.com/enragedginger/akka-quartz-scheduler)
-instead for this use case) nor is it to be used for higly precise firing of the events.
+instead for this use case) nor is it to be used for highly precise firing of the events.
The maximum amount of time into the future you can schedule an event to trigger is around 8 months,
which in practice is too much to be useful since this would assume the system never went down during that period.
If you need long-term scheduling we highly recommend looking into alternative schedulers, as this
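A back-of-the-envelope calculation shows where a ceiling of roughly 8 months could come from (an assumption for illustration: a wheel holding `Integer.MAX_VALUE` ticks at the default 10 ms tick duration):

```java
public class SchedulerHorizon {
    /** Days covered by Integer.MAX_VALUE ticks of the given tick length. */
    public static long maxDays(long tickMillis) {
        long maxTicks = Integer.MAX_VALUE;
        return maxTicks * tickMillis / 1000 / 60 / 60 / 24;
    }
}
```

With a 10 ms tick this works out to 248 days, a bit over 8 months.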
@@ -119,4 +119,4 @@
scheduled task was canceled or will (eventually) have run.

@@@

-@@snip [Scheduler.scala]($akka$/akka-actor/src/main/scala/akka/actor/Scheduler.scala) { #cancellable }
+@@snip [Scheduler.scala]($akka$/akka-actor/src/main/scala/akka/actor/Scheduler.scala) { #cancellable }
2 changes: 1 addition & 1 deletion akka-docs/src/main/paradox/stream/stream-substream.md
@@ -127,7 +127,7 @@
Like the `concat` operation on `Flow`, it fully consumes one `Source` after the
So, there is only one substream actively running at a given time.

Then once the active substream is fully consumed, the next substream can start running.
-Elements from all the substreams are concatnated to the sink.
+Elements from all the substreams are concatenated to the sink.

![stream-substream-flatMapConcat2.png](../../images/stream-substream-flatMapConcat2.png)
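Because `flatMapConcat` drains one substream fully before starting the next, its output order matches a plain sequential flatten over collections (hypothetical data; the real operator lives on akka-stream's `Source`):

```java
import java.util.List;
import java.util.stream.Collectors;

public class ConcatSketch {
    /** Concatenates substreams in order, one fully consumed before the next. */
    public static List<Integer> concatAll(List<List<Integer>> substreams) {
        return substreams.stream()
                .flatMap(List::stream) // each substream is emitted in sequence
                .collect(Collectors.toList());
    }
}
```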

