
created pending tag for HornetQ 2.0.0.CR1

2 parents c99b307 + 02dee30 commit ced2b3ca4612210beca473acb302ffd750b20254 @andytaylor andytaylor committed Dec 5, 2009
@@ -864,6 +864,7 @@
<chmod file="${build.distro.bin.dir}/run.sh" perm="ugo+rx"/>
<chmod file="${build.distro.bin.dir}/stop.sh" perm="ugo+rx"/>
<chmod file="${build.distro.bin.dir}/build.sh" perm="ugo+rx"/>
+ <chmod file="${build.distro.config.dir}/jboss-as/build.sh" perm="ugo+rx"/>
<copy todir="${build.distro.bin.dir}">
<fileset dir="${native.bin.dir}">
<include name="*.so"/>
@@ -0,0 +1,13 @@
+@echo off
+
+set "OVERRIDE_ANT_HOME=..\..\tools\ant"
+
+if exist "..\..\src\bin\build.bat" (
+ rem running from TRUNK
+ call ..\..\src\bin\build.bat %*
+) else (
+ rem running from the distro
+ call ..\..\bin\build.bat %*
+)
+
+set "OVERRIDE_ANT_HOME="
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+OVERRIDE_ANT_HOME=../../tools/ant
+export OVERRIDE_ANT_HOME
+
+if [ -f "../../src/bin/build.sh" ]; then
+ # running from TRUNK
+ ../../src/bin/build.sh "$@"
+else
+ # running from the distro
+ ../../bin/build.sh "$@"
+fi
+
+
+
@@ -105,8 +105,8 @@
directory where you installed JBoss AS 5</para>
</listitem>
<listitem>
- <para>run <literal>ant</literal> in HornetQ's <literal>config/jboss-as</literal>
- directory</para>
+ <para>run <literal>./build.sh</literal> (or <literal>build.bat</literal> if you are on
+ Windows) in HornetQ's <literal>config/jboss-as</literal> directory</para>
</listitem>
</orderedlist>
<para>This will create 2 new profiles in <literal>$JBOSS_HOME/server</literal>:</para>
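For example, the whole procedure might look like this from a shell (a minimal sketch; the install paths are assumptions):

    # assumed locations -- adjust to wherever you unpacked HornetQ and installed JBoss AS 5
    export JBOSS_HOME=/opt/jboss-as-5
    cd /opt/hornetq-2.0.0/config/jboss-as
    ./build.sh                  # build.bat on Windows
    ls $JBOSS_HOME/server       # the new profiles should now appear here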
@@ -130,8 +130,8 @@
<note>
<para>HornetQ can be deployed on AS 4 but isn't recommended</para>
</note>
- <para>As in AS 4, it is not shipped by default with the application server and you need to create
- new AS 4 profiles to run AS 4 with HornetQ.</para>
+ <para>As with AS 5, it is not shipped by default with the application server and you need to
+ create new AS 4 profiles to run AS 4 with HornetQ.</para>
<para>To create AS 4 profiles:</para>
<orderedlist>
<listitem>
@@ -142,7 +142,8 @@
directory where you installed JBoss AS 4</para>
</listitem>
<listitem>
- <para>run <literal>ant as4</literal> in HornetQ's <literal>config/jboss-as</literal>
+ <para>run <literal>./build.sh</literal> (or <literal>build.bat</literal> if you
+ are on Windows) in HornetQ's <literal>config/jboss-as</literal>
directory</para>
</listitem>
</orderedlist>
@@ -0,0 +1,13 @@
+@echo off
+
+set "OVERRIDE_ANT_HOME=..\..\tools\ant"
+
+if exist "..\..\src\bin\build.bat" (
+ rem running from TRUNK
+ call ..\..\src\bin\build.bat %*
+) else (
+ rem running from the distro
+ call ..\..\bin\build.bat %*
+)
+
+set "OVERRIDE_ANT_HOME="
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+OVERRIDE_ANT_HOME=../../tools/ant
+export OVERRIDE_ANT_HOME
+
+if [ -f "../../src/bin/build.sh" ]; then
+ # running from TRUNK
+ ../../src/bin/build.sh "$@"
+else
+ # running from the distro
+ ../../bin/build.sh "$@"
+fi
+
+
+
@@ -68,8 +68,8 @@
>hornetq-configuration.xml</literal>, configure the live server with
knowledge of its backup server. This is done by specifying a <literal
>backup-connector-ref</literal> element. This element references a
- connector, also specified on the live server which specifies how
- to connect to the backup server.</para>
+ connector, also specified on the live server, which specifies how to connect
+ to the backup server.</para>
<para>Here's a snippet from live server's <literal
>hornetq-configuration.xml</literal> configured to connect to its backup
server:</para>
@@ -86,8 +86,8 @@
&lt;/connector>
&lt;/connectors></programlisting>
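The diff elides the body of this snippet; a hedged reconstruction of the live server's configuration (the connector name, host and port are assumptions):

    <backup-connector-ref connector-name="backup-connector"/>

    <connectors>
       <!-- points at the acceptor on the backup server -->
       <connector name="backup-connector">
          <factory-class>org.hornetq.integration.transports.netty.NettyConnectorFactory</factory-class>
          <param key="host" value="backuphost" type="String"/>
          <param key="port" value="5445" type="Integer"/>
       </connector>
    </connectors>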
<para>Secondly, on the backup server, we flag the server as a backup and make
- sure it has an acceptor that the live server can connect to. We also make sure the shared-store paramater is
- set to false:</para>
+ sure it has an acceptor that the live server can connect to. We also make
+ sure the shared-store parameter is set to false:</para>
<programlisting>
&lt;backup>true&lt;/backup>
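The matching backup server settings, again as a hedged sketch (the acceptor details are assumptions):

    <backup>true</backup>
    <shared-store>false</shared-store>

    <acceptors>
       <!-- the acceptor the live server's backup-connector connects to -->
       <acceptor name="netty-acceptor">
          <factory-class>org.hornetq.integration.transports.netty.NettyAcceptorFactory</factory-class>
          <param key="host" value="backuphost" type="String"/>
          <param key="port" value="5445" type="Integer"/>
       </acceptor>
    </acceptors>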
@@ -104,8 +104,8 @@
<para>For a backup server to function correctly it's also important that it has
the same set of bridges, predefined queues, cluster connections, broadcast
groups and discovery groups as defined on the live node. The easiest way to
- ensure this is to copy the entire server side configuration from live
- to backup and just make the changes as specified above. </para>
+ ensure this is to copy the entire server side configuration from live to
+ backup and just make the changes as specified above. </para>
</section>
<section>
<title>Synchronizing a Backup Node to a Live Node</title>
@@ -247,46 +247,52 @@
linkend="examples.non-transaction-failover"/>.</para>
<section id="ha.automatic.failover.noteonreplication">
<title>A Note on Server Replication</title>
- <para>HornetQ does not replicate full server state betwen live and backup servers.
- When the new session is automatically recreated on the backup it won't have
- any knowledge of messages already sent or acknowledged in that session. Any
- inflight sends or acknowledgements at the time of failover might also be
+ <para>HornetQ does not replicate full server state between live and backup servers.
+ When the new session is automatically recreated on the backup it won't have any
+ knowledge of messages already sent or acknowledged in that session. Any
+ in-flight sends or acknowledgements at the time of failover might also be
lost.</para>
<para>By replicating full server state, theoretically we could provide a 100%
transparent seamless failover, which would avoid any lost messages or
acknowledgements, however this comes at a great cost: replicating the full
- server state (including the queues, session, etc.). This would require replication of
- the entire server state machine; every operation on the live server would have
- to replicated on the replica server(s) in the exact same global order to ensure
- a consistent replica state. This is extremely hard to do in a performant and
- scalable way, especially when one considers that multiple threads are changing
- the live server state concurrently.</para>
- <para>Some solutions which provide full state machine replication use
+ server state (including the queues, session, etc.). This would require
+ replication of the entire server state machine; every operation on the live
+ server would have to be replicated on the replica server(s) in the exact same
+ global order to ensure a consistent replica state. This is extremely hard to do
+ in a performant and scalable way, especially when one considers that multiple
+ threads are changing the live server state concurrently.</para>
+ <para>Some messaging systems which provide full state machine replication use
techniques such as <emphasis role="italic">virtual synchrony</emphasis>, but
this does not scale well and effectively serializes all operations to a single
thread, dramatically reducing concurrency.</para>
<para>Other techniques for multi-threaded active replication exist such as
replicating lock states or replicating thread scheduling but this is very hard
to achieve at a Java level.</para>
- <para>Consequently it xas decided it was not worth massively reducing performance and
- concurrency for the sake of 100% transparent failover. Even without 100%
+ <para>Consequently it was decided that it was not worth massively reducing performance
+ and concurrency for the sake of 100% transparent failover. Even without 100%
transparent failover, it is simple to guarantee <emphasis role="italic">once and
- only once</emphasis> delivery, even in the case of failure, by
- using a combination of duplicate detection and retrying of transactions. However
- this is not 100% transparent to the client code.</para>
+ only once</emphasis> delivery, even in the case of failure, by using a
+ combination of duplicate detection and retrying of transactions. However this is
+ not 100% transparent to the client code.</para>
</section>
<section id="ha.automatic.failover.blockingcalls">
<title>Handling Blocking Calls During Failover</title>
- <para>If the client code is in a blocking call to the server, waiting for
- a response to continue its execution, when failover occurs, the new session
- will not have any knowledge of the call that was in progress. This call might
- otherwise hang for ever, waiting for a response that will never come.</para>
- <para>To prevent this, HornetQ will unblock any blocking calls that were in
- progress at the time of failover by making them throw a <literal
+ <para>If the client code is in a blocking call to the server, waiting for a response
+ to continue its execution, when failover occurs, the new session will not have
+ any knowledge of the call that was in progress. This call might otherwise hang
+ forever, waiting for a response that will never come.</para>
+ <para>To prevent this, HornetQ will unblock any blocking calls that were in progress
+ at the time of failover by making them throw a <literal
>javax.jms.JMSException</literal> (if using JMS), or a <literal
>HornetQException</literal> with error code <literal
>HornetQException.UNBLOCKED</literal>. It is up to the client code to catch
this exception and retry any operations if desired.</para>
+ <para>If the method being unblocked is a call to commit() or prepare(), then the
+ transaction will be automatically rolled back and HornetQ will throw a <literal
+ >javax.jms.TransactionRolledBackException</literal> (if using JMS), or a
+ <literal>HornetQException</literal> with error code <literal
+ >HornetQException.TRANSACTION_ROLLED_BACK</literal> if using the core
+ API.</para>
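In client code, handling these unblocked calls might look like the following core API sketch (the session variable and the retry policy are assumptions):

    try
    {
       session.commit();
    }
    catch (HornetQException e)
    {
       if (e.getCode() == HornetQException.UNBLOCKED)
       {
          // the blocking call was unblocked by failover; its outcome is unknown, so retry if desired
       }
       else if (e.getCode() == HornetQException.TRANSACTION_ROLLED_BACK)
       {
          // the commit was unblocked and the transaction rolled back; redo the work and commit again
       }
    }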
</section>
<section id="ha.automatic.failover.transactions">
<title>Handling Failover With Transactions</title>
@@ -302,15 +308,15 @@
<para>It is up to the user to catch the exception, and perform any client side local
rollback code as necessary. The user can then just retry the transactional
operations again on the same session.</para>
- <para>HornetQ ships with a fully functioning example demonstrating how to do this, please
- see <xref linkend="examples.transaction-failover"/></para>
+ <para>HornetQ ships with a fully functioning example demonstrating how to do this,
+ please see <xref linkend="examples.transaction-failover"/></para>
<para>If failover occurs when a commit call is being executed, the server, as
previously described, will unblock the call to prevent a hang, since no response
- will come back. In this case it is not easy for the
- client to determine whether the transaction commit was actually processed on the
- live server before failure occurred.</para>
+ will come back. In this case it is not easy for the client to determine whether
+ the transaction commit was actually processed on the live server before failure
+ occurred.</para>
<para>To remedy this, the client can simply enable duplicate detection (<xref
- linkend="duplicate-detection"/>) in the transaction, and retry the
+ linkend="duplicate-detection"/>) in the transaction, and retry the
transaction operations again after the call is unblocked. If the transaction had
indeed been committed on the live server successfully before failover, then when
the transaction is retried, duplicate detection will ensure that any persistent
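In client code, the duplicate-detection-plus-retry pattern described here might look like the following JMS sketch (variable names and the ID scheme are assumptions; MessageImpl is the class imported in the example file further down):

    // give each message a unique duplicate detection ID before the first send
    TextMessage message = session.createTextMessage("some text");
    message.setStringProperty(MessageImpl.HDR_DUPLICATE_DETECTION_ID.toString(), "uniqueid-1");
    producer.send(message);

    try
    {
       session.commit();
    }
    catch (TransactionRolledBackException e)
    {
       // the commit was unblocked by failover; resend and commit again -- duplicate
       // detection discards anything already persisted on the live server
       producer.send(message);
       session.commit();
    }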
@@ -325,14 +331,12 @@
</section>
<section id="ha.automatic.failover.nontransactional">
<title>Handling Failover With Non Transactional Sessions</title>
- <para>If the session is non transactional, messages or
- acknowledgements can be lost in the event of failover.</para>
+ <para>If the session is non transactional, messages or acknowledgements can be lost
+ in the event of failover.</para>
<para>If you wish to provide <emphasis role="italic">once and only once</emphasis>
- delivery guarantees for non transacted sessions too, then make sure you send
- messages blocking, enabled duplicate detection, and catch unblock exceptions as
- described in <xref linkend="ha.automatic.failover.blockingcalls"/></para>
- <para>However bear in mind that sending messages and acknowledgements blocking will
- incur performance penalties due to the network round trip involved.</para>
+ delivery guarantees for non transacted sessions too, enable duplicate
+ detection and catch unblock exceptions as described in <xref
+ linkend="ha.automatic.failover.blockingcalls"/></para>
</section>
</section>
<section>
@@ -365,8 +369,8 @@
server.</para>
<para>For a working example of application-level failover, please see <xref
linkend="application-level-failover"/>.</para>
- <para>If you are using the core API, then the procedure is very similar: you would set
- a <literal>FailureListener</literal> on the core <literal>ClientSession</literal>
+ <para>If you are using the core API, then the procedure is very similar: you would set a
+ <literal>FailureListener</literal> on the core <literal>ClientSession</literal>
instances.</para>
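A hedged core API sketch of registering such a listener (the reconnection logic is an assumption):

    session.addFailureListener(new FailureListener()
    {
       public void connectionFailed(HornetQException e)
       {
          // connection to the live server was lost: recreate the connection,
          // session and consumers against another server at the application level
       }
    });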
</section>
</section>
@@ -101,11 +101,7 @@
</section>
<section>
<title>System properties</title>
- <para>HornetQ also takes a couple of Java system properties on the command line for
- configuring logging properties</para>
- <para>HornetQ uses JDK logging to minimise dependencies on other logging systems. JDK
- logging can then be configured to delegate to some other framework, e.g. log4j if that's
- what you prefer.</para>
+ <para>HornetQ can take a system property on the command line for configuring logging.</para>
<para>For more information on configuring logging, please see <xref linkend="logging"
/>.</para>
</section>
@@ -52,7 +52,6 @@
<address-settings>
<!--default for catch all-->
<address-setting match="#">
- <clustered>false</clustered>
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
@@ -23,6 +23,7 @@
import javax.naming.InitialContext;
import org.hornetq.common.example.HornetQExample;
+import org.hornetq.core.message.impl.MessageImpl;
/**
* A simple example that demonstrates failover of the JMS connection from one node to another
@@ -71,7 +72,7 @@ public boolean runExample() throws Exception
for (int i = 0; i < numMessages; i++)
{
TextMessage message = session.createTextMessage("This is text message " + i);
- producer.send(message);
+ // set a duplicate detection ID so a message resent after failover is ignored
+ message.setStringProperty(MessageImpl.HDR_DUPLICATE_DETECTION_ID.toString(), "uniqueid" + i);
+ producer.send(message);
System.out.println("Sent message: " + message.getText());
}
@@ -94,7 +95,7 @@ public boolean runExample() throws Exception
// Step 10. Crash server #1, the live server, and wait a little while to make sure
// it has really crashed
killServer(1);
- Thread.sleep(2000);
+ Thread.sleep(5000);
// Step 11. Acknowledging the 2nd half of the sent messages will fail as failover to the
// backup server has occurred