
chore: move tests from akka-projection-grpc into own submodule (#796)
* chore: move tests from akka-projection-grpc into own submodule
sebastian-alfers committed Feb 20, 2023
1 parent 74aaaca commit c42f74e
Showing 30 changed files with 28 additions and 17 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/integration-tests-grpc.yml
@@ -49,7 +49,7 @@ jobs:
jvm: ${{ matrix.jvmName }}

- name: Run all integration tests with default Scala and Java ${{ matrix.jdkVersion }}
run: sbt "akka-projection-grpc/It/test" ${{ matrix.extraOpts }}
run: sbt "akka-projection-grpc-tests/It/test" ${{ matrix.extraOpts }}
env: # Disable Ryuk resource reaper since we always spin up fresh VMs
TESTCONTAINERS_RYUK_DISABLED: true

15 changes: 12 additions & 3 deletions build.sbt
@@ -99,12 +99,21 @@ lazy val `durable-state` =

lazy val grpc =
Project(id = "akka-projection-grpc", base = file("akka-projection-grpc"))
.configs(IntegrationTest)
.settings(headerSettings(IntegrationTest))
.settings(Defaults.itSettings)
.settings(Dependencies.grpc)
.dependsOn(core)
.dependsOn(eventsourced)
.enablePlugins(AkkaGrpcPlugin)
.settings(akkaGrpcCodeGeneratorSettings += "server_power_apis", IntegrationTest / fork := true)

lazy val grpcTests =
Project(id = "akka-projection-grpc-tests", base = file("akka-projection-grpc-tests"))
.configs(IntegrationTest)
.settings(headerSettings(IntegrationTest))
.disablePlugins(MimaPlugin)
.settings(Defaults.itSettings)
.settings(Dependencies.grpcTest)
.settings(publish / skip := true)
.dependsOn(grpc)
.dependsOn(testkit % Test)
.enablePlugins(AkkaGrpcPlugin)
.settings(akkaGrpcCodeGeneratorSettings += "server_power_apis", IntegrationTest / fork := true)
24 changes: 12 additions & 12 deletions docs/src/main/paradox/grpc-replicated-event-sourcing-transport.md
@@ -1,11 +1,11 @@
# Akka Replicated Event Sourcing over gRPC

Akka Replicated Event Sourcing extends Akka Persistence allowing multiple replicas of the same entity, all accepting
writes, for example in different data centers or cloud provider regions. This makes it possible to implement patterns
such as active-active and hot standby.

Originally, Akka Replicated Event Sourcing required cross-replica access to the underlying replica database, which
can be hard to open up for security and infrastructure reasons. It was also easiest to use in an
[Akka Multi DC Cluster](https://doc.akka.io/docs/akka/current/typed/cluster-dc.html) setup
where a single cluster spans multiple datacenters or regions, another thing that can be complicated to allow.

@@ -103,18 +103,18 @@ Java

### Settings

The @apidoc[akka.projection.grpc.replication.*.ReplicationSettings] @scala[`apply`]@java[`create`] factory methods can
accept an entity name, a @apidoc[ReplicationProjectionProvider] and an actor system. The configuration of that system
is expected to have a top level entry with the entity name containing this structure:

Scala
: @@snip [config](/akka-projection-grpc/src/test/scala/akka/projection/grpc/replication/ReplicationSettingsSpec.scala) { #config }
: @@snip [config](/akka-projection-grpc-tests/src/test/scala/akka/projection/grpc/replication/ReplicationSettingsSpec.scala) { #config }

Java
: @@snip [config](/akka-projection-grpc/src/test/scala/akka/projection/grpc/replication/ReplicationSettingsSpec.scala) { #config }
: @@snip [config](/akka-projection-grpc-tests/src/test/scala/akka/projection/grpc/replication/ReplicationSettingsSpec.scala) { #config }
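
For orientation, here is a minimal sketch of the Scala side, assuming the entity is configured under the hypothetical name `my-replicated-entity`, uses a made-up `MyCommand` type and the R2DBC-backed @apidoc[ReplicationProjectionProvider]; none of these names come from this commit.

```scala
import akka.actor.typed.ActorSystem
import akka.projection.grpc.replication.scaladsl.ReplicationSettings
import akka.projection.r2dbc.scaladsl.R2dbcReplication

object MyReplicationSetup {
  // Hypothetical command type of the replicated entity
  sealed trait MyCommand

  // The factory takes the entity name, a ReplicationProjectionProvider and
  // (implicitly) the actor system whose config contains the entry described above
  def settings(implicit system: ActorSystem[_]): ReplicationSettings[MyCommand] =
    ReplicationSettings[MyCommand]("my-replicated-entity", R2dbcReplication())
}
```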

The entries in the block refer to the local replica while `replicas` is a list of all replicas, including the node itself,
with details about how to reach the replicas across the network.

The `grpc.client` section for each of the replicas is used for setting up the Akka gRPC client and supports the same discovery, TLS
and other connection options as when using Akka gRPC directly. For more details see @extref:[Akka gRPC configuration](akka-grpc:client/configuration.html#by-configuration).
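
A hedged sketch of what such a configuration block could look like is shown below, written the way a test might declare it; the key names (`replica-id`, `host`, `port`, `use-tls`) and the replica names are assumptions for illustration and may differ from the actual reference configuration.

```scala
import com.typesafe.config.ConfigFactory

// Hypothetical config: one top level entry named after the entity, with a
// `replicas` list describing how to reach every replica over gRPC
val config = ConfigFactory.parseString("""
  my-replicated-entity {
    replicas: [
      {
        replica-id = "replica-a"
        grpc.client {
          host = "replica-a.example.com"
          port = 443
          use-tls = true
        }
      },
      {
        replica-id = "replica-b"
        grpc.client {
          host = "replica-b.example.com"
          port = 443
          use-tls = true
        }
      }
    ]
  }
""")
```
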
@@ -150,10 +150,10 @@ be passed at once to `EventProducer.grpcServiceHandler` to create a single produ
streams.

Scala
: @@snip [ProducerApiSample.scala](/akka-projection-grpc/src/test/scala/akka/projection/grpc/replication/scaladsl/ProducerApiSample.scala) { #multi-service }
: @@snip [ProducerApiSample.scala](/akka-projection-grpc-tests/src/test/scala/akka/projection/grpc/replication/scaladsl/ProducerApiSample.scala) { #multi-service }

Java
: @@snip [ReplicationCompileTest.java](/akka-projection-grpc/src/test/java/akka/projection/grpc/replication/javdsl/ReplicationCompileTest.java) { #multi-service }
: @@snip [ReplicationCompileTest.java](/akka-projection-grpc-tests/src/test/java/akka/projection/grpc/replication/javdsl/ReplicationCompileTest.java) { #multi-service }
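
As a rough sketch of what that could look like (the two `EventProducerSource` values, the bind address and the port are placeholders, not taken from this repository):

```scala
import scala.concurrent.Future

import akka.actor.typed.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{ HttpRequest, HttpResponse }
import akka.projection.grpc.producer.scaladsl.EventProducer
import akka.projection.grpc.producer.scaladsl.EventProducer.EventProducerSource

// Hypothetical: sourceA and sourceB stand in for the producer sources of two
// replicated entities (or other Akka Projection gRPC event producers)
def bindSingleProducerService(sourceA: EventProducerSource, sourceB: EventProducerSource)(
    implicit system: ActorSystem[_]): Future[Http.ServerBinding] = {
  // One handler serving the event streams of both sources
  val handler: HttpRequest => Future[HttpResponse] =
    EventProducer.grpcServiceHandler(Set(sourceA, sourceB))
  Http().newServerAt("127.0.0.1", 8101).bind(handler)
}
```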


The Akka HTTP server must be running with HTTP/2; this is done through config:
@@ -167,13 +167,13 @@ Java
### Serialization of events

The events are serialized for being passed over the wire using the same Akka serializer as configured for serializing
the events for storage.

Note that having separate replicas increases the risk that two different serialized formats and versions of the serializer
are running at the same time, so extra care must be taken when changing the events and their serialization and deploying
new versions of the application to the replicas.

For some scenarios it may be necessary to do a two-step deploy of format changes to not lose data, first deploy support
for a new serialization format so that all replicas can deserialize it, then a second deploy where the new field is actually
populated with data.
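
As a hypothetical illustration of such a two-step change (the event and field names here are made up, not from this commit): first ship an event definition where the new field is optional and never set, then start populating it once every replica runs a version that can read it.

```scala
// Step one: add the field as optional with a default, so replicas that still
// receive events without it keep deserializing them; nothing writes it yet.
// Step two: once all replicas run this definition, start setting `note`.
final case class ItemAdded(itemId: String, quantity: Int, note: Option[String] = None)
```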

4 changes: 3 additions & 1 deletion project/Dependencies.scala
@@ -211,7 +211,9 @@ object Dependencies {
Compile.akkaPersistenceTyped,
Compile.akkaPersistenceQuery,
// Only needed for Replicated Event Sourcing over gRPC
Compile.akkaClusterShardingTyped % Optional,
Compile.akkaClusterShardingTyped % Optional)

val grpcTest = deps ++= Seq(
Test.akkaProjectionR2dbc,
Test.postgresDriver,
Test.akkaShardingTyped,
