Some advice for Alpakka contributors
To see how the guidelines listed below are implemented in practice, take a look at the reference connector. Feel free to use the reference connector as a starting point for new connectors.
Public factory methods
Depending on the technology you integrate with Akka Streams and Alpakka, you'll create Sources, Flows and Sinks. Regardless of how they are implemented, make sure the Source, Sink and Flow APIs you expose are simple and easy to use.
Reference connector model classes
When designing Flows, consider adding an extra field to the in- and out-messages that is passed through unchanged. A common use case we see is committing a Kafka offset after passing data to another system.
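For illustration, here is a minimal sketch, assuming hypothetical `WriteMessage`/`WriteResult` envelope classes, of how such a pass-through field travels untouched through a connector flow:

```scala
import akka.NotUsed
import akka.stream.scaladsl.Flow

// Hypothetical message envelopes: `passThrough` (eg. a Kafka committable offset)
// is carried along but never interpreted by the connector.
final class WriteMessage[T, PassThrough](val value: T, val passThrough: PassThrough)
final class WriteResult[T, PassThrough](val value: T, val passThrough: PassThrough)

object PassThroughExample {
  // Stand-in flow: a real connector would write `value` to the external system
  // and emit the untouched passThrough together with the result, so downstream
  // stages can eg. commit the Kafka offset once the write has happened.
  def flow[T, P]: Flow[WriteMessage[T, P], WriteResult[T, P], NotUsed] =
    Flow[WriteMessage[T, P]].map(msg => new WriteResult(msg.value, msg.passThrough))
}
```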
Implementing the Java API in Scala
Reference connector Java API factory methods
Alpakka, same as Akka, aims to keep 100% feature parity between the various language DSLs. Implementing even the Java API in Scala has proven the most viable way to do it, as long as you keep the following in mind (a sketch of the wrapping approach follows the list):
- Keep entry points separated in `javadsl` and `scaladsl` packages.
- Provide factory methods for Sources, Flows and Sinks in the `javadsl` package, wrapping all the methods in the Scala API. The Akka Stream Scala instances have a `.asJava` method to convert to their `akka.stream.javadsl` counterparts.
- When using Scala `object` instances, offer a `getInstance()` method and add a sealed abstract class (to support Scala 2.11) to get the return type.
- When the Scala API contains an `apply` method, offer a `create` method for Java users.
- Do not nest Scala `object`s more than two levels, as access from Java becomes awkward.
- Be careful to convert values within data structures (eg. for …).
- When compiling with both Scala 2.11 and 2.12, some methods considered overloads in 2.11 become ambiguous in 2.12, as both may be functional interfaces.
- Complement any methods taking Scala collections with a Java collection version.
- Use the `akka.japi.Pair` class to return tuples.
- If the underlying Scala code requires an `ExecutionContext`, make the Java API take an `Executor` and convert it with `ExecutionContext.fromExecutor`.
- Make use of `scala-java8-compat` conversions, see GitHub (eg. `scala.compat.java8.FutureConverters` to translate Scala Futures to Java `CompletionStage`s).
Overview of Scala types and their Java counterparts
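As a sketch of this wrapping approach (the connector name, settings class and stand-in implementation below are hypothetical), a `javadsl` factory typically just delegates to the `scaladsl` one and converts the stream with `.asJava`:

```scala
import akka.NotUsed

// Hypothetical settings class for the example connector.
final class MyConnectorSettings(val endpoint: String)

// Mirrors the scaladsl package: the actual Scala implementation lives here.
object scaladsl {
  import akka.stream.scaladsl.Source

  object MyConnectorSource {
    def apply(settings: MyConnectorSettings): Source[String, NotUsed] =
      Source.single(s"data from ${settings.endpoint}") // stand-in implementation
  }
}

// Mirrors the javadsl package: only wraps the Scala API for Java users.
object javadsl {
  import akka.stream.javadsl.Source

  object MyConnectorSource {
    /** Java API: create a source of messages for the given settings. */
    def create(settings: MyConnectorSettings): Source[String, NotUsed] =
      scaladsl.MyConnectorSource(settings).asJava
  }
}
```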
Reference connector settings classes
Most technologies will have a couple of configuration settings that are needed across several Sources, Flows, or Sinks. Create classes collecting these settings instead of passing the individual values to every method.
Create a default instance with good defaults in the companion object, and update it via `withXxx` methods that set individual fields of the settings instance (see the sketch below).
In case you see the need to support reading the settings from `Config`, offer a method taking the `Config` instance, so that the user can apply a proper namespace.
Refrain from using `akka.stream.alpakka` as the Config prefix; prefer `alpakka` as the root namespace.
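A minimal sketch of such a settings class (the field names and defaults below are made up for illustration):

```scala
import com.typesafe.config.Config

// Hypothetical settings: a plain class (no case class, no default arguments),
// evolved via withXxx copy methods.
final class MyConnectorSettings private (val host: String, val port: Int) {
  def withHost(value: String): MyConnectorSettings = new MyConnectorSettings(value, port)
  def withPort(value: Int): MyConnectorSettings = new MyConnectorSettings(host, value)

  override def toString: String = s"MyConnectorSettings(host=$host, port=$port)"
}

object MyConnectorSettings {
  // Default instance with sensible defaults, updated via withXxx methods.
  val Default: MyConnectorSettings = new MyConnectorSettings("localhost", 1234)

  // Takes an already-resolved Config so users can choose their own namespace,
  // eg. config.getConfig("alpakka.my-connector").
  def apply(config: Config): MyConnectorSettings =
    new MyConnectorSettings(config.getString("host"), config.getInt("port"))
}
```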
Evolving APIs with binary compatibility
All Akka APIs aim to evolve in a binary compatible way within minor versions.
- Do not use any default arguments.
- Do not use case classes (the public `copy` method relies on default arguments).
- To generate a case class replacement, consider using Kaze Class.
See Binary Compatibility Rules in the Akka documentation.
Use the Migration Manager (MiMa) to validate whether versions are binary compatible. MiMa is part of the Alpakka build, and its checks can be triggered with the `mimaReportBinaryIssues` sbt task.
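As an illustration, a connector's sbt settings might point MiMa at the previously released artifact (the module name and version below are hypothetical):

```scala
// build.sbt sketch: sbt-mima-plugin compares the current code against this artifact.
mimaPreviousArtifacts := Set(
  "com.lightbend.akka" %% "akka-stream-alpakka-myconnector" % "1.0.0"
)
```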
All the external runtime dependencies for the project, including transitive dependencies, must have an open source license that is equal to, or compatible with, Apache 2.
Which licenses are compatible with Apache 2 is defined in this doc: licenses listed under Category A are automatically compatible with Apache 2, while those listed under Category B need additional action:
Each license in this category requires some degree of reciprocity; therefore, additional action must be taken in order to minimize the chance that a user of an Apache product will create a derivative work of a reciprocally-licensed portion of an Apache product without being aware of the applicable requirements.
Dependency licenses will be checked automatically by the sbt Whitesource plug-in.
Packages & Scoping
Use `final` extensively to limit the API surface.
| Package | Purpose |
|---------|---------|
| `javadsl` | Java-only part of the API, normally factories for Sources, Flows and Sinks |
| `scaladsl` | Scala-only part of the API, normally factories for Sources, Flows and Sinks |
| (connector's top-level package) | Shared API, eg. settings classes |
| (separate implementation package) | Internal implementation |
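As a small sketch of the scoping conventions (the package and class names below are hypothetical), an internal implementation class could look like this:

```scala
package akka.stream.alpakka.myconnector.impl

import akka.annotation.InternalApi

/**
 * INTERNAL API: not part of the public interface, may change without notice.
 */
@InternalApi
private[myconnector] final class ConnectionPool
```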
Graph stage checklist
Reference connector operator implementations
- Keep mutable state within the `GraphStageLogic` (see the sketch after this list)
- Open connections in `preStart()`
- Release resources in `postStop()`
- Fail early on configuration errors
- Make sure the code is thread-safe; if in doubt, please ask!
- No Blocking At Any Time -- in other words, avoid blocking whenever possible and replace it with asynchronous programming (async callbacks, stage actors)
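In practice the checklist might look like the sketch below; the `MySettings` and `Connection` types are hypothetical stand-ins for a real client library:

```scala
import akka.stream.{Attributes, Outlet, SourceShape}
import akka.stream.stage.{GraphStage, GraphStageLogic, OutHandler}

// Hypothetical client types standing in for a real driver.
final class MySettings(val endpoint: String)
final class Connection(endpoint: String) {
  def read(): String = s"data from $endpoint"
  def close(): Unit = ()
}

final class MySource(settings: MySettings) extends GraphStage[SourceShape[String]] {
  val out: Outlet[String] = Outlet("MySource.out")
  override val shape: SourceShape[String] = SourceShape(out)

  // Fail early on configuration errors, before any stream is materialized.
  require(settings.endpoint.nonEmpty, "endpoint must not be empty")

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) with OutHandler {
      // Mutable state lives inside the GraphStageLogic, never in the stage itself.
      private var connection: Option[Connection] = None

      // Open connections / acquire resources in preStart.
      override def preStart(): Unit =
        connection = Some(new Connection(settings.endpoint))

      override def onPull(): Unit =
        connection.foreach(c => push(out, c.read()))

      // Release resources in postStop.
      override def postStop(): Unit =
        connection.foreach(_.close())

      setHandler(out, this)
    }
}
```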
Use of blocking APIs
Many technologies come with client libraries that only support blocking calls. Akka Stream stages that use blocking APIs should preferably be run on Akka's `IODispatcher`. (In rare cases you might want to allow users to configure another dispatcher to run the blocking operations on.)
To select Akka's `IODispatcher` for a stage, use:

```scala
override protected def initialAttributes: Attributes = Attributes(ActorAttributes.IODispatcher)
```
When the `IODispatcher` is selected, you do NOT need to wrap the blocking calls in `Future`s or similar constructs.
(Issue akka/akka#25540 requests better support for selecting the correct execution context.)
Keep the code DRY
Avoid duplicating code between different Sources, Sinks and Flows. Extract the common logic into a shared abstract `GraphStageLogic` that is inherited by the concrete stage implementations.
Sometimes it may be useful to provide a Sink or Source for a connector, even if the main concept is implemented as a Flow. This can be easily done by reusing the Flow implementation:
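A sketch of that reuse (the flow, settings and message types below are hypothetical stand-ins):

```scala
import scala.concurrent.Future
import akka.{Done, NotUsed}
import akka.stream.scaladsl.{Flow, Keep, Sink}

// Hypothetical connector types used for illustration.
final class Message(val payload: String)
final class ConnectorSettings(val endpoint: String)

object MyConnectorFlow {
  def apply(settings: ConnectorSettings): Flow[Message, Message, NotUsed] =
    Flow[Message] // stand-in for the real connector flow
}

object MyConnectorSink {
  // Run the existing Flow and ignore its output; Sink.ignore's Future[Done]
  // materialized value signals completion of the stream.
  def apply(settings: ConnectorSettings): Sink[Message, Future[Done]] =
    MyConnectorFlow(settings).toMat(Sink.ignore)(Keep.right)
}
```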
You do not need to expose every configuration option a Flow offers this way; focus on the most important ones.
Write tests for all public methods and possible settings.
Use Docker containers for any external technology your tests need. The containers' configuration can be added to the project's Docker Compose file. Please ensure that you limit the amount of resources used by the containers.
Reference connector paradox documentation
Using Paradox syntax (which is very close to Markdown), create or complement the documentation in the `docs` module. Prepare code snippets in the tests to be included by Paradox. Such examples should be part of real tests and not kept in otherwise unused source files.
Use ScalaDoc if you see the need to describe the API usage better than the naming does.
Run `sbt docs/Local/paradox` to generate reference docs while developing. The generated documentation can be found under the `docs` project's `target` directory.