ejb-timer: Example of Jakarta Enterprise Bean Timer Service - @Schedule and @Timeout

The ejb-timer quickstart demonstrates how to use the Jakarta Enterprise Bean timer service @Schedule and @Timeout annotations with WildFly.

What is it?

The ejb-timer quickstart demonstrates how to use the Jakarta Enterprise Bean timer service in WildFly Application Server. This example creates a timer service that uses the @Schedule and @Timeout annotations.

The following Jakarta Enterprise Bean Timer services are demonstrated (a minimal sketch of both beans follows this list):

  • @Schedule: Uses this annotation to mark a method to be executed according to the calendar schedule specified in the attributes of the annotation. This example schedules a message to be printed to the server console every 6 seconds.

  • @Timeout: Uses this annotation to mark a method to execute when a programmatic timer goes off. This example sets the timer to go off every 3 seconds, at which point the method prints a message to the server console.
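
Below is a minimal, hypothetical sketch of what the two beans could look like, using class names matching the ScheduleExample and TimeoutExample beans referenced in the server logs later in this document. The actual sources ship with the quickstart and may differ in details such as package name, log format, and how the timer identifier is derived; each class would live in its own source file.

// Hypothetical sketch only; the quickstart's real sources are the reference.
package org.example.timers;

import java.time.Instant;

import jakarta.annotation.PostConstruct;
import jakarta.annotation.Resource;
import jakarta.ejb.Schedule;
import jakarta.ejb.Singleton;
import jakarta.ejb.Startup;
import jakarta.ejb.Timeout;
import jakarta.ejb.Timer;
import jakarta.ejb.TimerConfig;
import jakarta.ejb.TimerService;

// Calendar-based timer: @Schedule fires every 6 seconds and is persistent by default.
@Singleton
@Startup
public class ScheduleExample {

    @Schedule(hour = "*", minute = "*", second = "*/6", info = "Every 6 seconds")
    public void scheduledTimeout(final Timer timer) {
        // hashCode() stands in here for whatever identifier the real quickstart logs.
        System.out.println("Timeout received for " + getClass().getSimpleName()
                + "[" + timer.hashCode() + "] at " + Instant.now());
    }
}

// Programmatic timer: created at startup via the injected TimerService and marked
// non-persistent; the @Timeout callback then runs every 3 seconds.
@Singleton
@Startup
public class TimeoutExample {

    @Resource
    private TimerService timerService;

    @PostConstruct
    public void createTimer() {
        // Initial delay of 3 seconds, then every 3 seconds; false = non-persistent (transient).
        timerService.createIntervalTimer(3_000, 3_000, new TimerConfig("Every 3 seconds", false));
    }

    @Timeout
    public void programmaticTimeout(final Timer timer) {
        System.out.println("Timeout received for " + getClass().getSimpleName()
                + "[" + timer.hashCode() + "] at " + Instant.now());
    }
}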

System Requirements

The application this project produces is designed to be run on WildFly Application Server 31 or later.

All you need to build this project is Java 11.0 (Java SDK 11) or later and Maven 3.6.0 or later. See Configure Maven to Build and Deploy the Quickstarts to make sure you are configured correctly for testing the quickstarts.

Use of the WILDFLY_HOME and QUICKSTART_HOME Variables

In the following instructions, replace WILDFLY_HOME with the actual path to your WildFly installation. The installation path is described in detail here: Use of WILDFLY_HOME and JBOSS_HOME Variables.

When you see the replaceable variable QUICKSTART_HOME, replace it with the path to the root directory of all of the quickstarts.

Start the WildFly Standalone Server

  1. Open a terminal and navigate to the root of the WildFly directory.

  2. Start the WildFly server with the default profile by typing the following command.

    $ WILDFLY_HOME/bin/standalone.sh 
    Note
    For Windows, use the WILDFLY_HOME\bin\standalone.bat script.

Build and Deploy the Quickstart

  1. Make sure WildFly server is started.

  2. Open a terminal and navigate to the root directory of this quickstart.

  3. Type the following command to build the quickstart.

    $ mvn clean package
  4. Type the following command to deploy the quickstart.

    $ mvn wildfly:deploy

This deploys the ejb-timer/target/ejb-timer.war to the running instance of the server.

You should see a message in the server log indicating that the archive deployed successfully.

Access the Application

This application only prints messages to stdout. Each timeout callback logs the class name of the @Singleton bean that created the timer, an identifier of the timer, and the timestamp of the callback. In our example application, the ScheduleExample bean creates a persistent timer, while the TimeoutExample creates a non-persistent (i.e. transient) timer. To see it working, check the server log. You should see similar output:

INFO  [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:04:24.896811Z
INFO  [stdout] (EJB default - 1) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:27.002334Z
INFO  [stdout] (EJB default - 2) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:30.004340Z
INFO  [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:04:30.014526Z
INFO  [stdout] (EJB default - 1) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:33.001997Z
INFO  [stdout] (EJB default - 2) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:36.001444Z
INFO  [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:04:36.004266Z
INFO  [stdout] (EJB default - 1) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:39.001746Z
INFO  [stdout] (EJB default - 2) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:42.002048Z
INFO  [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:04:42.010535Z
INFO  [stdout] (EJB default - 1) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:45.000920Z
INFO  [stdout] (EJB default - 2) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:48.001840Z
INFO  [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:04:48.010532Z
INFO  [stdout] (EJB default - 1) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:51.002591Z
INFO  [stdout] (EJB default - 2) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:54.001734Z

The invocations are handled by existing threads from the EJB thread pool. The threads are rotated, and the name of the thread that handles each invocation is printed in parentheses, for example (EJB default - 1).

To demonstrate the behavioral difference between persistent and non-persistent timers, stop the server via CTRL-C and restart it. Upon restart, you will see similar periodic timeout events. The persistent timer keeps the same identifier, since persistent timers are restored upon restart, while the non-persistent timer now has a different identifier, since transient timers are lost when the server shuts down and a new one is created on startup.

INFO  [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:18:36.013024Z
INFO  [stdout] (EJB default - 1) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:39.001383Z
INFO  [stdout] (EJB default - 2) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:42.002232Z
INFO  [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:18:42.011380Z
INFO  [stdout] (EJB default - 1) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:45.001951Z
INFO  [stdout] (EJB default - 2) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:48.002369Z
INFO  [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:18:48.008104Z
INFO  [stdout] (EJB default - 1) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:51.002364Z
INFO  [stdout] (EJB default - 2) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:54.002230Z
INFO  [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:18:54.009333Z
INFO  [stdout] (EJB default - 1) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:57.001874Z
INFO  [stdout] (EJB default - 2) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:19:00.002287Z
INFO  [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:19:00.010617Z
INFO  [stdout] (EJB default - 1) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:19:03.002128Z
INFO  [stdout] (EJB default - 2) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:19:06.002358Z
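
The difference in behavior comes down to each timer's persistent flag: @Schedule timers are persistent unless the annotation sets persistent = false, while the TimeoutExample timer is created as non-persistent. As a purely hypothetical illustration (not part of the quickstart's sources), a timeout callback can confirm this at runtime through the standard jakarta.ejb.Timer API:

// Hypothetical helper bean, shown only to illustrate the Timer API; class and
// package names are assumptions and not part of the quickstart.
package org.example.timers;

import java.time.Instant;

import jakarta.ejb.Schedule;
import jakarta.ejb.Singleton;
import jakarta.ejb.Startup;
import jakarta.ejb.Timer;

@Singleton
@Startup
public class TimerInspectionExample {

    // Fires every 10 seconds and reports whether the firing timer is persistent
    // (true by default for @Schedule timers) along with its info payload.
    @Schedule(hour = "*", minute = "*", second = "*/10", info = "inspection timer")
    public void inspect(final Timer timer) {
        System.out.println("info=" + timer.getInfo()
                + ", persistent=" + timer.isPersistent()
                + ", next timeout=" + timer.getNextTimeout()
                + ", at " + Instant.now());
    }
}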

Run the Integration Tests

This quickstart includes integration tests, which are located under the src/test/ directory. The integration tests verify that the quickstart runs correctly when deployed on the server.

Follow these steps to run the integration tests.

  1. Make sure WildFly server is started.

  2. Make sure the quickstart is deployed.

  3. Type the following command to run the verify goal with the integration-testing profile activated.

    $ mvn verify -Pintegration-testing 
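
The actual test sources under src/test/ are the authoritative reference. As a rough, hypothetical sketch of the style of check such a test can perform (not the quickstart's actual test code), the JUnit 5 test below reads the deployment URL from the server.host system property, falls back to an assumed default of http://localhost:8080/ejb-timer, and verifies that the server answers HTTP requests:

// Hypothetical integration test sketch; the class name, default URL, and assertion
// are assumptions, not the quickstart's real test.
package org.example.test;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

public class DeploymentAvailableIT {

    @Test
    public void deploymentResponds() throws Exception {
        // server.host is the same property used by the provisioned-server and OpenShift flows.
        String url = System.getProperty("server.host", "http://localhost:8080/ejb-timer");
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // Any non-5xx status is taken as evidence the server is up and routed the request.
        Assertions.assertTrue(response.statusCode() < 500,
                "Unexpected server error from " + url + ": " + response.statusCode());
    }
}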

Undeploy the Quickstart

When you are finished testing the quickstart, follow these steps to undeploy the archive.

  1. Make sure WildFly server is started.

  2. Open a terminal and navigate to the root directory of this quickstart.

  3. Type this command to undeploy the archive:

    $ mvn wildfly:undeploy

Building and running the quickstart application with provisioned WildFly server

Instead of using a standard WildFly server distribution, you can alternatively provision a WildFly server to deploy and run the quickstart, by activating the Maven profile named provisioned-server when building the quickstart:

$ mvn clean package -Pprovisioned-server

The provisioned WildFly server, with the quickstart deployed, can then be found in the target/server directory, and its usage is similar to that of a standard server distribution, with the simplification that there is no need to specify the server configuration to be started.

The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:

        <profile>
            <id>provisioned-server</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.wildfly.plugins</groupId>
                        <artifactId>wildfly-maven-plugin</artifactId>
                        <configuration>
                            <feature-packs>
                                <feature-pack>
                                    <location>org.wildfly:wildfly-galleon-pack:${version.server}</location>
                                </feature-pack>
                            </feature-packs>
                            <layers>...</layers>
                            <!-- deploys the quickstart on root web context -->
                            <name>ROOT.war</name>
                        </configuration>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>package</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    ...
                </plugins>
            </build>
        </profile>
Note

Since the plugin configuration above deploys the quickstart on the root web context of the provisioned server, the URL used to access the application should not have the /ejb-timer path segment after HOST:PORT.

Run the Integration Tests with a provisioned server

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with a provisioned server.

Follow these steps to run the integration tests.

  1. Make sure the server is provisioned.

    $ mvn clean package -Pprovisioned-server
  2. Start the WildFly provisioned server, this time using the WildFly Maven Plugin, which is recommended for testing due to simpler automation. The path to the provisioned server should be specified using the jbossHome system property.

    $ mvn wildfly:start -DjbossHome=target/server 
  3. Type the following command to run the verify goal with the integration-testing profile activated, specifying the quickstart’s URL using the server.host system property, which for a provisioned server is http://localhost:8080 by default.

    $ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080 
  4. Shut down the provisioned WildFly server, again using the WildFly Maven Plugin.

    $ mvn wildfly:shutdown

Building and running the quickstart application with OpenShift

Build the WildFly Source-to-Image (S2I) Quickstart to OpenShift with Helm Charts

On OpenShift, the S2I build with Apache Maven uses an openshift Maven profile to provision a WildFly server and to deploy and run the quickstart in an OpenShift environment.

The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:

        <profile>
            <id>openshift</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.wildfly.plugins</groupId>
                        <artifactId>wildfly-maven-plugin</artifactId>
                        <configuration>
                            <feature-packs>
                                <feature-pack>
                                    <location>org.wildfly:wildfly-galleon-pack:${version.server}</location>
                                </feature-pack>
                                <feature-pack>
                                    <location>org.wildfly.cloud:wildfly-cloud-galleon-pack:${version.pack.cloud}</location>
                                </feature-pack>
                            </feature-packs>
                            <layers>...</layers>
                            <name>ROOT.war</name>
                        </configuration>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>package</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    ...
                </plugins>
            </build>
        </profile>

Note that, unlike the provisioned-server profile, this profile also includes the cloud feature pack, which enables a configuration tuned for the OpenShift environment.

Getting Started with WildFly for OpenShift and Helm Charts

This section contains the basic instructions to build and deploy this quickstart to WildFly for OpenShift or WildFly for OpenShift Online using Helm Charts.

Prerequisites

  • You must be logged in to OpenShift and have an oc client available to connect to OpenShift.

  • Helm must be installed to deploy the backend on OpenShift.

Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.

$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
wildfly/wildfly         ...             ...            Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common  ...             ...            A library chart for WildFly-based applications

Deploy the WildFly Source-to-Image (S2I) Quickstart to OpenShift with Helm Charts

Log in to your OpenShift instance using the oc login command. The backend will be built and deployed on OpenShift with a Helm Chart for WildFly.

Navigate to the root directory of this quickstart and run the following command:

$ helm install ejb-timer -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s
NAME: ejb-timer
...
STATUS: deployed
REVISION: 1

This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:

$ oc get deployment ejb-timer

The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:

build:
  uri: https://github.com/wildfly/quickstart.git
  ref: main
  contextDir: ejb-timer
deploy:
  replicas: 1

This will create a new deployment on OpenShift and deploy the application.

If you want to see all the configuration elements to customize your deployment you can use the following command:

$ helm show readme wildfly/wildfly

Get the URL of the route to the deployment.

$ oc get route ejb-timer -o jsonpath="{.spec.host}"

Access the application in your web browser using the displayed URL.

Note

The Maven profile named openshift is used by the Helm chart to provision the server with the quickstart deployed on the root web context, and thus the application should be accessed with the URL without the /ejb-timer path segment after HOST:PORT.

Run the Integration Tests with OpenShift

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on OpenShift.

Note

The integration tests expect a deployed application, so make sure you have deployed the quickstart on OpenShift before you begin.

Run the integration tests using the following command to run the verify goal with the integration-testing profile activated and the proper URL:

$ mvn verify -Pintegration-testing -Dserver.host=https://$(oc get route ejb-timer --template='{{ .spec.host }}') 
Note

The tests use SSL to connect to the quickstart running on OpenShift, so the certificates must be trusted by the machine from which the tests are run.

Undeploy the WildFly Source-to-Image (S2I) Quickstart from OpenShift with Helm Charts

$ helm uninstall ejb-timer

Using Timer Service within a cluster

To demonstrate distributed TimerService behavior, a cluster of at least two application server instances must be started. Begin by making a copy of the entire WildFly directory to be used as the second cluster member. Note that the example can also run on a single node, but you will not be able to observe the singleton behavior of the persistent timer.

The default configuration of the HA profiles is pre-configured for fully distributed persistent timers, as well as passivation support for non-persistent timers.

Start the two WildFly servers with the same HA profile using the following commands. Note that a socket binding port offset and a unique node name must be passed to the second server if the servers are binding to the same host.

$ WILDFLY_HOME_1/bin/standalone.sh -c standalone-ha.xml -Djboss.node.name=node1
$ WILDFLY_HOME_2/bin/standalone.sh -c standalone-ha.xml -Djboss.node.name=node2 -Djboss.socket.binding.port-offset=100
Note
For Windows, use the WILDFLY_HOME_1\bin\standalone.bat and WILDFLY_HOME_2\bin\standalone.bat scripts.

This example is not limited to two servers. Additional servers can be started by specifying a unique port offset for each one.

Next, use the following commands to deploy the already built demo application archive to each server. Note that since the default management socket binding port is 9990 and the second server's ports are offset by 100, the sum, 10090, must be passed as an argument to the deploy Maven goal.

$ mvn wildfly:deploy
$ mvn wildfly:deploy -Dwildfly.port=10090

Once deployed, you should begin to see log messages for the timer events. However, while timeout events for the non-persistent timer created by the TimeoutExample bean are triggered on both nodes, timeout events for the persistent timer created by the ScheduleExample bean are triggered on only one node.

node1:

INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:36.003154Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:39.003098Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:42.002884Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:45.003209Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:48.001284Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:51.001656Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:54.001396Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:57.001848Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:58:00.001673Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:58:03.001794Z

node2:

INFO  [stdout] (TimerScheduler - 1) Timeout received for ScheduleExample[-392733837] at 2022-05-05T20:57:36.003800Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:36.003799Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:39.003279Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:42.003483Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for ScheduleExample[-392733837] at 2022-05-05T20:57:42.003699Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:45.003339Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for ScheduleExample[-392733837] at 2022-05-05T20:57:48.001545Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:48.001544Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:51.001657Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:54.001710Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for ScheduleExample[-392733837] at 2022-05-05T20:57:54.001710Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:57.001717Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for ScheduleExample[-392733837] at 2022-05-05T20:58:00.001091Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:58:00.001547Z
INFO  [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:58:03.001514Z

If you then shut down the node on which the ScheduleExample timeouts appear (in our case, node2), the other node (in our case, node1) will promptly begin receiving timeouts for that same persistent timer (as indicated by the same identifier).

Restart the node that was previously shut down (in our case, node2) using the same command as above. You should observe that timeouts for the ScheduleExample timer resume on the original node (in our case, node2), while the other node (in our case, node1) no longer receives those timeout events. In fact, if you carefully collate the timestamps for the ScheduleExample bean across the server logs, you should find that no events were skipped and no duplicate events were received.