Add unidoc directive for markdowns #24563

Merged: 8 commits, Mar 1, 2018
50 changes: 25 additions & 25 deletions akka-docs/src/main/paradox/cluster-client.md
@@ -1,18 +1,18 @@
# Cluster Client

An actor system that is not part of the cluster can communicate with actors
somewhere in the cluster via this @scala[@scaladoc[`ClusterClient`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClient`](akka.cluster.client.ClusterClient)]. The client can of course be part of
somewhere in the cluster via this @unidoc[ClusterClient]. The client can of course be part of
another cluster. It only needs to know the location of one (or more) nodes to use as initial
contact points. It will establish a connection to a @scala[@scaladoc[`ClusterReceptionist`](akka.cluster.client.ClusterReceptionist)]@java[@javadoc[`ClusterReceptionist`](akka.cluster.client.ClusterReceptionist)] somewhere in
contact points. It will establish a connection to a @unidoc[akka.cluster.client.ClusterReceptionist] somewhere in
the cluster. It will monitor the connection to the receptionist and establish a new
connection if the link goes down. When looking for a new receptionist it uses fresh
contact points retrieved from previous establishment, or periodically refreshed contacts,
i.e. not necessarily the initial contact points.

@@@ note

@scala[@scaladoc[`ClusterClient`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClient`](akka.cluster.client.ClusterClient)] should not be used when sending messages to actors that run
within the same cluster. Similar functionality as the @scala[@scaladoc[`ClusterClient`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClient`](akka.cluster.client.ClusterClient)] is
@unidoc[ClusterClient] should not be used when sending messages to actors that run
within the same cluster. Similar functionality as the @unidoc[ClusterClient] is
provided in a more efficient way by @ref:[Distributed Publish Subscribe in Cluster](distributed-pub-sub.md) for actors that
belong to the same cluster.

@@ -23,23 +23,23 @@ to `remote` or `cluster` when using
the cluster client.

The receptionist is supposed to be started on all nodes, or all nodes with specified role,
in the cluster. The receptionist can be started with the @scala[@scaladoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)]@java[@javadoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)] extension
in the cluster. The receptionist can be started with the @unidoc[akka.cluster.client.ClusterReceptionist] extension
or as an ordinary actor.

You can send messages via the @scala[@scaladoc[`ClusterClient`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClient`](akka.cluster.client.ClusterClient)] to any actor in the cluster that is registered
in the @scala[@scaladoc[`DistributedPubSubMediator`](akka.cluster.pubsub.DistributedPubSubMediator)]@java[@javadoc[`DistributedPubSubMediator`](akka.cluster.pubsub.DistributedPubSubMediator)] used by the @scala[@scaladoc[`ClusterReceptionist`](akka.cluster.client.ClusterReceptionist)]@java[@javadoc[`ClusterReceptionist`](akka.cluster.client.ClusterReceptionist)].
The @scala[@scaladoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)]@java[@javadoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)] provides methods for registration of actors that
You can send messages via the @unidoc[ClusterClient] to any actor in the cluster that is registered
in the @unidoc[DistributedPubSubMediator] used by the @unidoc[akka.cluster.client.ClusterReceptionist].
The @unidoc[ClusterClientReceptionist] provides methods for registration of actors that
should be reachable from the client. Messages are wrapped in `ClusterClient.Send`,
@scala[@scaladoc[`ClusterClient.SendToAll`](akka.cluster.client.ClusterClient$)]@java[`ClusterClient.SendToAll`] or @scala[@scaladoc[`ClusterClient.Publish`](akka.cluster.client.ClusterClient$)]@java[`ClusterClient.Publish`].

Both the @scala[@scaladoc[`ClusterClient`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClient`](akka.cluster.client.ClusterClient)] and the @scala[@scaladoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClient)] emit events that can be subscribed to.
The @scala[@scaladoc[`ClusterClient`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClient`](akka.cluster.client.ClusterClient)] sends out notifications in relation to having received a list of contact points
from the @scala[@scaladoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)]@java[@javadoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)]. One use of this list might be for the client to record its
Both the @unidoc[ClusterClient] and the @unidoc[ClusterClientReceptionist] emit events that can be subscribed to.
The @unidoc[ClusterClient] sends out notifications in relation to having received a list of contact points
from the @unidoc[ClusterClientReceptionist]. One use of this list might be for the client to record its
contact points. A client that is restarted could then use this information to supersede any previously
configured contact points.

The @scala[@scaladoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)]@java[@javadoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)] sends out notifications in relation to having received a contact
from a @scala[@scaladoc[`ClusterClient`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClient`](akka.cluster.client.ClusterClient)]. This notification enables the server containing the receptionist to become aware of
The @unidoc[ClusterClientReceptionist] sends out notifications in relation to having received a contact
from a @unidoc[ClusterClient]. This notification enables the server containing the receptionist to become aware of
what clients are connected.

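Not part of the diff: a minimal sketch of the server-side registration described above, assuming the classic `ClusterClientReceptionist` API and a node configured for cluster/remote as noted earlier; `ServiceA` is a hypothetical actor class.

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.cluster.client.ClusterClientReceptionist

class ServiceA extends Actor {
  def receive = { case msg => println(s"got $msg") }
}

object RegisterSketch extends App {
  val system = ActorSystem("ClusterSys")
  val serviceA = system.actorOf(Props[ServiceA], "serviceA")
  // Make serviceA reachable via ClusterClient.Send("/user/serviceA", ...) from outside the cluster.
  ClusterClientReceptionist(system).registerService(serviceA)
}
```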
1. **ClusterClient.Send**
@@ -68,13 +68,13 @@ to avoid inbound connections from other cluster nodes to the client:
* @scala[@scaladoc[`sender()`](akka.actor.Actor)] @java[@javadoc[`getSender()`](akka.actor.Actor)] of the response messages, sent back from the destination and seen by the client,
is `deadLetters`

since the client should normally send subsequent messages via the @scala[@scaladoc[`ClusterClient`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClient`](akka.cluster.client.ClusterClient)].
since the client should normally send subsequent messages via the @unidoc[ClusterClient].
It is possible to pass the original sender inside the reply messages if
the client is supposed to communicate directly to the actor in the cluster.

While establishing a connection to a receptionist the @scala[@scaladoc[`ClusterClient`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClient`](akka.cluster.client.ClusterClient)] will buffer
While establishing a connection to a receptionist the @unidoc[ClusterClient] will buffer
messages and send them when the connection is established. If the buffer is full
the @scala[@scaladoc[`ClusterClient`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClient`](akka.cluster.client.ClusterClient)] will drop old messages when new messages are sent via the client.
the @unidoc[ClusterClient] will drop old messages when new messages are sent via the client.
The size of the buffer is configurable and it can be disabled by using a buffer size of 0.

It's worth noting that messages can always be lost because of the distributed nature
@@ -98,7 +98,7 @@ Scala
Java
: @@snip [ClusterClientTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #server }

On the client you create the @scala[@scaladoc[`ClusterClient`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClient`](akka.cluster.client.ClusterClient)] actor and use it as a gateway for sending
On the client you create the @unidoc[ClusterClient] actor and use it as a gateway for sending
messages to the actors identified by their path (without address information) somewhere
in the cluster.
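Not part of the diff: a minimal sketch of the client-side usage this paragraph describes, assuming the classic `akka.cluster.client` API; host names and the service path are placeholders, not taken from the referenced snippets.

```scala
import akka.actor.{ActorPath, ActorSystem}
import akka.cluster.client.{ClusterClient, ClusterClientSettings}

object ClientSketch extends App {
  val system = ActorSystem("ClientSys")

  // Initial contact points: receptionists on known cluster nodes (placeholder addresses).
  val initialContacts = Set(
    ActorPath.fromString("akka.tcp://ClusterSys@host1:2552/system/receptionist"),
    ActorPath.fromString("akka.tcp://ClusterSys@host2:2552/system/receptionist"))

  val client = system.actorOf(
    ClusterClient.props(ClusterClientSettings(system).withInitialContacts(initialContacts)),
    "client")

  // "serviceA" must have been registered with ClusterClientReceptionist on the cluster side.
  client ! ClusterClient.Send("/user/serviceA", "hello", localAffinity = true)
}
```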

@@ -130,7 +130,7 @@ That is convenient and perfectly fine in most cases, but it can be good to know
start the `akka.cluster.client.ClusterReceptionist` actor as an ordinary actor and you can have several
different receptionists at the same time, serving different types of clients.

Note that the @scala[@scaladoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)]@java[@javadoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)] uses the @scala[@scaladoc[`DistributedPubSub`](akka.cluster.pubsub.DistributedPubSub)]@java[@javadoc[`DistributedPubSub`](akka.cluster.pubsub.DistributedPubSub)] extension, which is described
Note that the @unidoc[ClusterClientReceptionist] uses the @unidoc[DistributedPubSub] extension, which is described
in @ref:[Distributed Publish Subscribe in Cluster](distributed-pub-sub.md).

It is recommended to load the extension when the actor system is started by defining it in the
@@ -142,9 +142,9 @@ akka.extensions = ["akka.cluster.client.ClusterClientReceptionist"]

## Events

As mentioned earlier, both the @scala[@scaladoc[`ClusterClient`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClient`](akka.cluster.client.ClusterClient)] and @scala[@scaladoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)]@java[@javadoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)] emit events that can be subscribed to.
As mentioned earlier, both the @unidoc[ClusterClient] and @unidoc[ClusterClientReceptionist] emit events that can be subscribed to.
The following code snippet declares an actor that will receive notifications on contact points (addresses to the available
receptionists), as they become available. The code illustrates subscribing to the events and receiving the @scala[@scaladoc[`ClusterClient`](akka.cluster.client.ClusterClient)]@java[@javadoc[`ClusterClient`](akka.cluster.client.ClusterClient)]
receptionists), as they become available. The code illustrates subscribing to the events and receiving the @unidoc[ClusterClient]
initial state.

Scala
@@ -153,7 +153,7 @@ Scala
Java
: @@snip [ClusterClientTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #clientEventsListener }

Similarly we can have an actor that behaves in a similar fashion for learning what cluster clients are connected to a @scala[@scaladoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)]@java[@javadoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)]:
Similarly we can have an actor that behaves in a similar fashion for learning what cluster clients are connected to a @unidoc[ClusterClientReceptionist]:

Scala
: @@snip [ClusterClientSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala) { #receptionistEventsListener }
@@ -186,14 +186,14 @@ Maven
<a id="cluster-client-config"></a>
## Configuration

The @scala[@scaladoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)]@java[@javadoc[`ClusterClientReceptionist`](akka.cluster.client.ClusterClientReceptionist)] extension (or @scala[@scaladoc[`ClusterReceptionistSettings`](akka.cluster.client.ClusterReceptionistSettings)]@java[@javadoc[`ClusterReceptionistSettings`](akka.cluster.client.ClusterReceptionistSettings)]) can be configured
The @unidoc[ClusterClientReceptionist] extension (or @unidoc[ClusterReceptionistSettings]) can be configured
with the following properties:

@@snip [reference.conf]($akka$/akka-cluster-tools/src/main/resources/reference.conf) { #receptionist-ext-config }

The following configuration properties are read by the @scala[@scaladoc[`ClusterClientSettings`](akka.cluster.client.ClusterClientSettings)]@java[@javadoc[`ClusterClientSettings`](akka.cluster.client.ClusterClientSettings)]
when created with a @scala[@scaladoc[`ActorSystem`](akka.actor.ActorSystem)]@java[@javadoc[`ActorSystem`](akka.actor.ActorSystem)] parameter. It is also possible to amend the @scala[@scaladoc[`ClusterClientSettings`](akka.cluster.client.ClusterClientSettings)]@java[@javadoc[`ClusterClientSettings`](akka.cluster.client.ClusterClientSettings)]
or create it from another config section with the same layout as below. @scala[@scaladoc[`ClusterClientSettings`](akka.cluster.client.ClusterClientSettings)]@java[@javadoc[`ClusterClientSettings`](akka.cluster.client.ClusterClientSettings)] is
The following configuration properties are read by the @unidoc[ClusterClientSettings]
when created with a @scala[@scaladoc[`ActorSystem`](akka.actor.ActorSystem)]@java[@javadoc[`ActorSystem`](akka.actor.ActorSystem)] parameter. It is also possible to amend the @unidoc[ClusterClientSettings]
or create it from another config section with the same layout as below. @unidoc[ClusterClientSettings] is
a parameter to the @scala[@scaladoc[`ClusterClient.props`](akka.cluster.client.ClusterClient$)]@java[@javadoc[`ClusterClient.props`](akka.cluster.client.ClusterClient$)] factory method, i.e. each client can be configured
with different settings if needed.
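Not part of the diff: a small sketch of creating the settings from a custom config section, as the paragraph above allows. The section name is hypothetical; it assumes the classic `ClusterClientSettings` API.

```scala
import akka.actor.ActorSystem
import akka.cluster.client.{ClusterClient, ClusterClientSettings}

object ConfiguredClientSketch extends App {
  val system = ActorSystem("ClientSys")
  // "my-app.cluster-client" is a hypothetical section with the same layout as
  // akka.cluster.client in reference.conf.
  val settings = ClusterClientSettings(system.settings.config.getConfig("my-app.cluster-client"))
    .withBufferSize(0) // e.g. disable buffering, as mentioned above
  val client = system.actorOf(ClusterClient.props(settings), "configuredClient")
}
```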

3 changes: 2 additions & 1 deletion build.sbt
@@ -1,4 +1,4 @@
import akka.AutomaticModuleName
import akka.{ParadoxSupport, AutomaticModuleName}

enablePlugins(UnidocRoot, TimeStampede, UnidocWithPrValidation, NoPublish)
disablePlugins(MimaPlugin)
@@ -231,6 +231,7 @@ lazy val docs = akkaModule("akka-docs")
deployRsyncArtifact := List((paradox in Compile).value -> s"www/docs/akka/${version.value}")
)
.enablePlugins(AkkaParadoxPlugin, DeployRsync, NoPublish, ParadoxBrowse, ScaladocNoVerificationOfDiagrams)
.settings(ParadoxSupport.paradoxWithCustomDirectives)
.disablePlugins(MimaPlugin, WhiteSourcePlugin)

lazy val multiNodeTestkit = akkaModule("akka-multi-node-testkit")
87 changes: 87 additions & 0 deletions project/ParadoxSupport.scala
@@ -0,0 +1,87 @@
/*
* Copyright (C) 2016-2018 Lightbend Inc. <https://www.lightbend.com>
*/

package akka

import _root_.io.github.lukehutch.fastclasspathscanner.FastClasspathScanner
import com.lightbend.paradox.markdown._
import com.lightbend.paradox.sbt.ParadoxPlugin.autoImport._
import org.pegdown.Printer
import org.pegdown.ast._
import sbt.Keys._
import sbt._

import scala.collection.JavaConverters._

object ParadoxSupport {
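// Scans the compile classpath for every class under the "akka" package and registers the
// @unidoc inline directive with that list, so simple class names can be resolved to FQCNs.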
val paradoxWithCustomDirectives = Seq(
paradoxDirectives ++= Def.taskDyn {
val classpath = (fullClasspath in Compile).value.files.map(_.toURI.toURL).toArray
val classloader = new java.net.URLClassLoader(classpath, this.getClass().getClassLoader())
lazy val scanner = new FastClasspathScanner("akka").addClassLoader(classloader).scan()
val allClasses = scanner.getNamesOfAllClasses.asScala.toVector
Def.task { Seq(
{ _: Writer.Context ⇒ new UnidocDirective(allClasses) }
)}
}.value
)

class UnidocDirective(allClasses: IndexedSeq[String]) extends InlineDirective("unidoc") {
Contributor Author: This is copied from akka/akka-http#1579, with unnecessary stuff removed.

Member: What unnecessary stuff could be removed? Can we also remove it over there?

Contributor Author: Unnecessary stuff was removed in 4161175: unused parameters, and `case 2` in the pattern match could be handled by `case n`.

As we are making the new sbt plugin anyway, we shouldn't need to do the same in akka-http. However, if the plugin takes too much time, then yes, I'll do that in akka-http as well.

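// A label containing '.' is treated as a fully qualified class name and must be present in the
// scanned classes; a plain class name is resolved via renderByClassName below.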
def render(node: DirectiveNode, visitor: Visitor, printer: Printer): Unit = {
if (node.label.split('[')(0).contains('.')) {
val fqcn = node.label
if (allClasses.contains(fqcn)) {
val label = fqcn.split('.').last
syntheticNode("java", javaLabel(label), fqcn, node).accept(visitor)
syntheticNode("scala", label, fqcn, node).accept(visitor)
} else {
throw new java.lang.IllegalStateException(s"fqcn not found by @unidoc[$fqcn]")
}
}
else {
renderByClassName(node.label, node, visitor, printer)
}
}

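// Translates a Scala-style label to its Java form: [T] becomes &lt;T&gt; and '_' becomes '?'.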
def javaLabel(label: String): String =
label.replaceAll("\\[", "&lt;").replaceAll("\\]", "&gt;").replace('_', '?')

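// Builds the tree that @unidoc expands to for one language: a @scala[...] or @java[...] group
// wrapping a @scaladoc/@javadoc directive that links the label to the given fqcn.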
def syntheticNode(group: String, label: String, fqcn: String, node: DirectiveNode): DirectiveNode = {
val syntheticSource = new DirectiveNode.Source.Direct(fqcn)
val attributes = new org.pegdown.ast.DirectiveAttributes.AttributeMap()
new DirectiveNode(DirectiveNode.Format.Inline, group, null, null, attributes, null,
new DirectiveNode(DirectiveNode.Format.Inline, group + "doc", label, syntheticSource, node.attributes, fqcn,
new TextNode(label)
))
}

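// Resolves a plain class name against the scanned classes: exactly one match, or a matching
// javadsl/scaladsl pair, is accepted; anything else fails the build with a hint to use the FQCN.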
def renderByClassName(label: String, node: DirectiveNode, visitor: Visitor, printer: Printer): Unit = {
val label = node.label.replaceAll("\\\\_", "_")
val labelWithoutGenericParameters = label.split("\\[")(0)
val labelWithJavaGenerics = javaLabel(label)
val matches = allClasses.filter(_.endsWith('.' + labelWithoutGenericParameters))
matches.size match {
case 0 =>
throw new java.lang.IllegalStateException(s"No matches found for $label")
case 1 if matches(0).contains("adsl") =>
throw new java.lang.IllegalStateException(s"Match for $label only found in one language: ${matches(0)}")
case 1 =>
syntheticNode("scala", label, matches(0), node).accept(visitor)
syntheticNode("java", labelWithJavaGenerics, matches(0), node).accept(visitor)
case 2 if matches.forall(_.contains("adsl")) =>
matches.foreach(m => {
if (!m.contains("javadsl"))
syntheticNode("scala", label, m, node).accept(visitor)
if (!m.contains("scaladsl"))
syntheticNode("java", labelWithJavaGenerics, m, node).accept(visitor)
})
case n =>
throw new java.lang.IllegalStateException(
s"$n matches found for @unidoc[$label], but not javadsl/scaladsl: ${matches.mkString(", ")}. " +
s"You may want to use the fully qualified class name as @unidoc[fqcn] instead of @unidoc[${label}]."
)
}
}
}
}

Contributor Author: The resulting error message looks like:

[error] Caused by: java.lang.IllegalStateException: 2 matches found for @unidoc[ClusterReceptionist], but not javadsl/scaladsl: akka.cluster.client.ClusterReceptionist, akka.cluster.typed.internal.receptionist.ClusterReceptionist. You may want to use the fully qualified class name as @unidoc[fqcn] instead of @unidoc[ClusterReceptionist].
[error] at akka.ParadoxSupport$UnidocDirective.renderByClassName(ParadoxSupport.scala:78)
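To make the matching rules above concrete, here is a toy sketch (hypothetical class names, independent of Paradox) of how a simple label resolves against the scanned class list:

```scala
object MatchingSketch extends App {
  // A pretend result of the classpath scan.
  val allClasses = Vector(
    "akka.stream.scaladsl.Source",
    "akka.stream.javadsl.Source",
    "akka.cluster.client.ClusterClient")

  def matchesFor(label: String): Vector[String] =
    allClasses.filter(_.endsWith("." + label))

  println(matchesFor("Source"))        // 2 matches, both *dsl: one Scala and one Java link are emitted
  println(matchesFor("ClusterClient")) // 1 match: both links point at the same class
  println(matchesFor("Nope"))          // 0 matches: the directive throws and the docs build fails
}
```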
3 changes: 3 additions & 0 deletions project/plugins.sbt
@@ -22,3 +22,6 @@ addSbtPlugin("com.lightbend.akka" % "sbt-paradox-akka" % "0.6")
addSbtPlugin("com.lightbend" % "sbt-whitesource" % "0.1.10")
addSbtPlugin("com.typesafe.sbt" % "sbt-git" % "0.9.3")
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.9.0") // for advanced PR validation features

// used for @unidoc directive
libraryDependencies += "io.github.lukehutch" % "fast-classpath-scanner" % "2.12.3"
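For context, a minimal standalone sketch (outside sbt) of the scanner calls used in ParadoxSupport above; the class names collected this way are what @unidoc matches against:

```scala
import io.github.lukehutch.fastclasspathscanner.FastClasspathScanner
import scala.collection.JavaConverters._

object ScanSketch extends App {
  // Scan only the "akka" package on the current classpath, as ParadoxSupport does.
  val scanner = new FastClasspathScanner("akka").scan()
  val allClasses = scanner.getNamesOfAllClasses.asScala.toVector
  println(s"found ${allClasses.size} classes under the akka package")
}
```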