created pending tag for HornetQ 2.0.0.CR1

commit ced2b3ca4612210beca473acb302ffd750b20254 2 parents c99b307 + 02dee30
@andytaylor authored
1  build-hornetq.xml
@@ -864,6 +864,7 @@
<chmod file="${build.distro.bin.dir}/run.sh" perm="ugo+rx"/>
<chmod file="${build.distro.bin.dir}/stop.sh" perm="ugo+rx"/>
<chmod file="${build.distro.bin.dir}/build.sh" perm="ugo+rx"/>
+ <chmod file="${build.distro.config.dir}/jboss-as/build.sh" perm="ugo+rx"/>
<copy todir="${build.distro.bin.dir}">
<fileset dir="${native.bin.dir}">
<include name="*.so"/>
13 docs/quickstart-guide/build.bat
@@ -0,0 +1,13 @@
+@echo off
+
+set "OVERRIDE_ANT_HOME=..\..\tools\ant"
+
+if exist "..\..\src\bin\build.bat" (
+ rem running from TRUNK
+ call ..\..\src\bin\build.bat %*
+) else (
+ rem running from the distro
+ call ..\..\bin\build.bat %*
+)
+
+set "OVERRIDE_ANT_HOME="
15 docs/quickstart-guide/build.sh
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+OVERRIDE_ANT_HOME=../../tools/ant
+export OVERRIDE_ANT_HOME
+
+if [ -f "../../src/bin/build.sh" ]; then
+ # running from TRUNK
+ ../../src/bin/build.sh "$@"
+else
+ # running from the distro
+ ../../bin/build.sh "$@"
+fi
+
+
+
11 docs/quickstart-guide/en/installation.xml
@@ -105,8 +105,8 @@
directory where you installed JBoss AS 5</para>
</listitem>
<listitem>
- <para>run <literal>ant</literal> in HornetQ's <literal>config/jboss-as</literal>
- directory</para>
+ <para>run <literal>./build.sh</literal> (or <literal>build.bat</literal> if you are on
+ Windows) in HornetQ's <literal>config/jboss-as</literal> directory</para>
</listitem>
</orderedlist>
<para>This will create 2 new profiles in <literal>$JBOSS_HOME/server</literal>:</para>
@@ -130,8 +130,8 @@
<note>
<para>HornetQ can be deployed on AS 4 but this isn't recommended</para>
</note>
- <para>As in AS 4, it is not shipped by default with the application server and you need to create
- new AS 4 profiles to run AS 4 with HornetQ.</para>
+ <para>As in AS 4, it is not shipped by default with the application server and you need to
+ create new AS 4 profiles to run AS 4 with HornetQ.</para>
<para>To create AS 4 profiles:</para>
<orderedlist>
<listitem>
@@ -142,7 +142,8 @@
directory where you installed JBoss AS 4</para>
</listitem>
<listitem>
- <para>run <literal>ant as4</literal> in HornetQ's <literal>config/jboss-as</literal>
+ <para>run <literal>./build.sh</literal> (or <literal>build.bat</literal> if you
+ are on Windows) in HornetQ's <literal>config/jboss-as</literal>
directory</para>
</listitem>
</orderedlist>
13 docs/user-manual/build.bat
@@ -0,0 +1,13 @@
+@echo off
+
+set "OVERRIDE_ANT_HOME=..\..\tools\ant"
+
+if exist "..\..\src\bin\build.bat" (
+ rem running from TRUNK
+ call ..\..\src\bin\build.bat %*
+) else (
+ rem running from the distro
+ call ..\..\bin\build.bat %*
+)
+
+set "OVERRIDE_ANT_HOME="
15 docs/user-manual/build.sh
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+OVERRIDE_ANT_HOME=../../tools/ant
+export OVERRIDE_ANT_HOME
+
+if [ -f "../../src/bin/build.sh" ]; then
+ # running from TRUNK
+ ../../src/bin/build.sh "$@"
+else
+ # running from the distro
+ ../../bin/build.sh "$@"
+fi
+
+
+
90 docs/user-manual/en/ha.xml
@@ -68,8 +68,8 @@
>hornetq-configuration.xml</literal>, configure the live server with
knowledge of its backup server. This is done by specifying a <literal
>backup-connector-ref</literal> element. This element references a
- connector, also specified on the live server which specifies how
- to connect to the backup server.</para>
+ connector, also specified on the live server which specifies how to connect
+ to the backup server.</para>
<para>Here's a snippet from live server's <literal
>hornetq-configuration.xml</literal> configured to connect to its backup
server:</para>
@@ -86,8 +86,8 @@
&lt;/connector>
&lt;/connectors></programlisting>
<para>Secondly, on the backup server, we flag the server as a backup and make
- sure it has an acceptor that the live server can connect to. We also make sure the shared-store paramater is
- set to false:</para>
+ sure it has an acceptor that the live server can connect to. We also make
+ sure the shared-store parameter is set to false:</para>
<programlisting>
&lt;backup>true&lt;/backup>
@@ -104,8 +104,8 @@
<para>For a backup server to function correctly it's also important that it has
the same set of bridges, predefined queues, cluster connections, broadcast
groups and discovery groups as defined on the live node. The easiest way to
- ensure this is to copy the entire server side configuration from live
- to backup and just make the changes as specified above. </para>
+ ensure this is to copy the entire server side configuration from live to
+ backup and just make the changes as specified above. </para>
</section>
<section>
<title>Synchronizing a Backup Node to a Live Node</title>
@@ -247,46 +247,52 @@
linkend="examples.non-transaction-failover"/>.</para>
<section id="ha.automatic.failover.noteonreplication">
<title>A Note on Server Replication</title>
- <para>HornetQ does not replicate full server state betwen live and backup servers.
- When the new session is automatically recreated on the backup it won't have
- any knowledge of messages already sent or acknowledged in that session. Any
- inflight sends or acknowledgements at the time of failover might also be
+ <para>HornetQ does not replicate full server state between live and backup servers.
+ When the new session is automatically recreated on the backup it won't have any
+ knowledge of messages already sent or acknowledged in that session. Any
+ in-flight sends or acknowledgements at the time of failover might also be
lost.</para>
<para>By replicating full server state, theoretically we could provide a 100%
transparent seamless failover, which would avoid any lost messages or
acknowledgements, however this comes at a great cost: replicating the full
- server state (including the queues, session, etc.). This would require replication of
- the entire server state machine; every operation on the live server would have
- to replicated on the replica server(s) in the exact same global order to ensure
- a consistent replica state. This is extremely hard to do in a performant and
- scalable way, especially when one considers that multiple threads are changing
- the live server state concurrently.</para>
- <para>Some solutions which provide full state machine replication use
+ server state (including the queues, session, etc.). This would require
+ replication of the entire server state machine; every operation on the live
+ server would have to be replicated on the replica server(s) in the exact same
+ global order to ensure a consistent replica state. This is extremely hard to do
+ in a performant and scalable way, especially when one considers that multiple
+ threads are changing the live server state concurrently.</para>
+ <para>Some messaging systems which provide full state machine replication use
techniques such as <emphasis role="italic">virtual synchrony</emphasis>, but
this does not scale well and effectively serializes all operations to a single
thread, dramatically reducing concurrency.</para>
<para>Other techniques for multi-threaded active replication exist such as
replicating lock states or replicating thread scheduling but this is very hard
to achieve at a Java level.</para>
- <para>Consequently it xas decided it was not worth massively reducing performance and
- concurrency for the sake of 100% transparent failover. Even without 100%
+ <para>Consequently it was decided it was not worth massively reducing performance
+ and concurrency for the sake of 100% transparent failover. Even without 100%
transparent failover, it is simple to guarantee <emphasis role="italic">once and
- only once</emphasis> delivery, even in the case of failure, by
- using a combination of duplicate detection and retrying of transactions. However
- this is not 100% transparent to the client code.</para>
+ only once</emphasis> delivery, even in the case of failure, by using a
+ combination of duplicate detection and retrying of transactions. However this is
+ not 100% transparent to the client code.</para>
</section>
<section id="ha.automatic.failover.blockingcalls">
<title>Handling Blocking Calls During Failover</title>
- <para>If the client code is in a blocking call to the server, waiting for
- a response to continue its execution, when failover occurs, the new session
- will not have any knowledge of the call that was in progress. This call might
- otherwise hang for ever, waiting for a response that will never come.</para>
- <para>To prevent this, HornetQ will unblock any blocking calls that were in
- progress at the time of failover by making them throw a <literal
+ <para>If the client code is in a blocking call to the server, waiting for a response
+ to continue its execution, when failover occurs, the new session will not have
+ any knowledge of the call that was in progress. This call might otherwise hang
+ forever, waiting for a response that will never come.</para>
+ <para>To prevent this, HornetQ will unblock any blocking calls that were in progress
+ at the time of failover by making them throw a <literal
>javax.jms.JMSException</literal> (if using JMS), or a <literal
>HornetQException</literal> with error code <literal
>HornetQException.UNBLOCKED</literal>. It is up to the client code to catch
this exception and retry any operations if desired.</para>
+ <para>If the method being unblocked is a call to commit() or prepare(), then the
+ transaction will be automatically rolled back and HornetQ will throw a <literal
+ >javax.jms.TransactionRolledBackException</literal> (if using JMS), or a
+ <literal>HornetQException</literal> with error code <literal
+ >HornetQException.TRANSACTION_ROLLED_BACK</literal> if using the core
+ API.</para>
</section>
<section id="ha.automatic.failover.transactions">
<title>Handling Failover With Transactions</title>
@@ -302,15 +308,15 @@
<para>It is up to the user to catch the exception, and perform any client side local
rollback code as necessary. The user can then just retry the transactional
operations again on the same session.</para>
- <para>HornetQ ships with a fully functioning example demonstrating how to do this, please
- see <xref linkend="examples.transaction-failover"/></para>
+ <para>HornetQ ships with a fully functioning example demonstrating how to do this,
+ please see <xref linkend="examples.transaction-failover"/></para>
<para>If failover occurs when a commit call is being executed, the server, as
previously described, will unblock the call to prevent a hang, since no response
- will come back. In this case it is not easy for the
- client to determine whether the transaction commit was actually processed on the
- live server before failure occurred.</para>
+ will come back. In this case it is not easy for the client to determine whether
+ the transaction commit was actually processed on the live server before failure
+ occurred.</para>
<para>To remedy this, the client can simply enable duplicate detection (<xref
- linkend="duplicate-detection"/>) in the transaction, and retry the
+ linkend="duplicate-detection"/>) in the transaction, and retry the
transaction operations again after the call is unblocked. If the transaction had
indeed been committed on the live server successfully before failover, then when
the transaction is retried, duplicate detection will ensure that any persistent
@@ -325,14 +331,12 @@
</section>
<section id="ha.automatic.failover.nontransactional">
<title>Handling Failover With Non Transactional Sessions</title>
- <para>If the session is non transactional, messages or
- acknowledgements can be lost in the event of failover.</para>
+ <para>If the session is non transactional, messages or acknowledgements can be lost
+ in the event of failover.</para>
<para>If you wish to provide <emphasis role="italic">once and only once</emphasis>
- delivery guarantees for non transacted sessions too, then make sure you send
- messages blocking, enabled duplicate detection, and catch unblock exceptions as
- described in <xref linkend="ha.automatic.failover.blockingcalls"/></para>
- <para>However bear in mind that sending messages and acknowledgements blocking will
- incur performance penalties due to the network round trip involved.</para>
+ delivery guarantees for non transacted sessions too, enable duplicate
+ detection, and catch unblock exceptions as described in <xref
+ linkend="ha.automatic.failover.blockingcalls"/></para>
</section>
</section>
<section>
@@ -365,8 +369,8 @@
server.</para>
<para>For a working example of application-level failover, please see <xref
linkend="application-level-failover"/>.</para>
- <para>If you are using the core API, then the procedure is very similar: you would set
- a <literal>FailureListener</literal> on the core <literal>ClientSession</literal>
+ <para>If you are using the core API, then the procedure is very similar: you would set a
+ <literal>FailureListener</literal> on the core <literal>ClientSession</literal>
instances.</para>
</section>
</section>
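The retry-with-duplicate-detection pattern this chapter describes can be sketched in plain Java, modeling the server-side duplicate-ID cache as a simple in-memory set. This is an illustration of the mechanism only, under assumed names: the class and `send` method are hypothetical stand-ins, not the HornetQ client API (the real examples set the ID via `MessageImpl.HDR_DUPLICATE_DETECTION_ID`).

```java
import java.util.HashSet;
import java.util.Set;

public class DuplicateDetectionSketch
{
   // Models the server-side cache of duplicate-detection IDs that makes
   // a retried transaction idempotent after failover. Hypothetical names.
   private static final Set<String> serverCache = new HashSet<String>();

   private static final Set<String> delivered = new HashSet<String>();

   private static void send(final String duplicateId, final String body)
   {
      if (serverCache.add(duplicateId))
      {
         delivered.add(body); // first occurrence: accepted and delivered
      }
      // duplicate ID: silently ignored by the server
   }

   public static void main(final String[] args)
   {
      // First attempt: the commit is processed but the response is lost at failover.
      for (int i = 0; i < 3; i++)
      {
         send("uniqueid" + i, "This is text message " + i);
      }

      // The client saw the blocked call unblocked / the transaction rolled back,
      // so it simply retries all the sends with the same duplicate IDs.
      for (int i = 0; i < 3; i++)
      {
         send("uniqueid" + i, "This is text message " + i);
      }

      System.out.println(delivered.size()); // each message delivered exactly once
   }
}
```

If the commit had in fact never reached the live server, the retried sends are accepted as first occurrences instead; either way each message is delivered once and only once.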
6 docs/user-manual/en/using-server.xml
@@ -101,11 +101,7 @@
</section>
<section>
<title>System properties</title>
- <para>HornetQ also takes a couple of Java system properties on the command line for
- configuring logging properties</para>
- <para>HornetQ uses JDK logging to minimise dependencies on other logging systems. JDK
- logging can then be configured to delegate to some other framework, e.g. log4j if that's
- what you prefer.</para>
+ <para>HornetQ can take a system property on the command line for configuring logging.</para>
<para>For more information on configuring logging, please see <xref linkend="logging"
/>.</para>
</section>
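The paragraph removed above noted that HornetQ uses JDK logging so it can delegate to another framework. A minimal, self-contained sketch of a JDK logging level taking effect for an `org.hornetq` logger (the logger name is assumed for illustration); here the properties are loaded from a string rather than from the file that `-Djava.util.logging.config.file` would name on the command line:

```java
import java.io.ByteArrayInputStream;
import java.util.logging.Level;
import java.util.logging.LogManager;
import java.util.logging.Logger;

public class LoggingConfigSketch
{
   public static void main(final String[] args) throws Exception
   {
      // Equivalent in effect to the properties file the system property would
      // point at, supplied inline so the sketch is self-contained.
      String props = "org.hornetq.level=FINE\n";

      LogManager.getLogManager().readConfiguration(new ByteArrayInputStream(props.getBytes()));

      Logger log = Logger.getLogger("org.hornetq");

      // The logger now accepts FINE records; a custom handler could forward
      // them to another framework such as log4j if preferred.
      System.out.println(log.isLoggable(Level.FINE));
   }
}
```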
1  examples/javaee/mdb-bmt/server/hornetq-configuration.xml
@@ -52,7 +52,6 @@
<address-settings>
<!--default for catch all-->
<address-setting match="#">
- <clustered>false</clustered>
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
5 examples/jms/non-transaction-failover/src/org/hornetq/jms/example/NonTransactionFailoverExample.java
@@ -23,6 +23,7 @@
import javax.naming.InitialContext;
import org.hornetq.common.example.HornetQExample;
+import org.hornetq.core.message.impl.MessageImpl;
/**
* A simple example that demonstrates failover of the JMS connection from one node to another
@@ -71,7 +72,7 @@ public boolean runExample() throws Exception
for (int i = 0; i < numMessages; i++)
{
TextMessage message = session.createTextMessage("This is text message " + i);
- producer.send(message);
+
System.out.println("Sent message: " + message.getText());
}
@@ -94,7 +95,7 @@ public boolean runExample() throws Exception
// Step 10. Crash server #1, the live server, and wait a little while to make sure
// it has really crashed
killServer(1);
- Thread.sleep(2000);
+ Thread.sleep(5000);
// Step 11. Acknowledging the 2nd half of the sent messages will fail as failover to the
// backup server has occurred
39 examples/jms/transaction-failover/src/org/hornetq/jms/example/TransactionFailoverExample.java
@@ -14,6 +14,7 @@
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
+import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
@@ -23,6 +24,7 @@
import javax.naming.InitialContext;
import org.hornetq.common.example.HornetQExample;
+import org.hornetq.core.message.impl.MessageImpl;
/**
* A simple example that demonstrates failover of the JMS connection from one node to another
@@ -59,7 +61,7 @@ public boolean runExample() throws Exception
// Step 4. We create a *transacted* JMS Session
Session session = connection.createSession(true, 0);
-
+
// Step 5. We start the connection to ensure delivery occurs
connection.start();
@@ -76,25 +78,37 @@ public boolean runExample() throws Exception
try
{
session.commit();
- } catch (TransactionRolledBackException e)
+ }
+ catch (TransactionRolledBackException e)
{
System.err.println("transaction has been rolled back: " + e.getMessage());
}
// Step 10. We resend all the messages
sendMessages(session, producer, numMessages, false);
- // Step 11. We commit the session succesfully: the messages will be all delivered to the activated backup server
+
+ // Step 11. We commit the session successfully: the messages will all be delivered to the activated backup
+ // server
session.commit();
-
// Step 12. We are now transparently reconnected to server #0, the backup server.
// We consume the messages sent before the crash of the live server and commit the session.
for (int i = 0; i < numMessages; i++)
{
TextMessage message0 = (TextMessage)consumer.receive(5000);
+
+ if (message0 == null)
+ {
+ System.err.println("Example failed - message wasn't received");
+
+ return false;
+ }
+
System.out.println("Got message: " + message0.getText());
}
+
session.commit();
+
System.out.println("Other message on the server? " + consumer.receive(5000));
return true;
@@ -121,19 +135,36 @@ private void sendMessages(Session session, MessageProducer producer, int numMess
for (int i = 0; i < numMessages / 2; i++)
{
TextMessage message = session.createTextMessage("This is text message " + i);
+
+ //We set the duplicate detection header - so the server will ignore the same message
+ //if sent again after failover
+
+ message.setStringProperty(MessageImpl.HDR_DUPLICATE_DETECTION_ID.toString(), "uniqueid" + i);
+
producer.send(message);
+
System.out.println("Sent message: " + message.getText());
}
+
if (killServer)
{
killServer(1);
+
Thread.sleep(2000);
}
+
// We send the remaining half of messages
for (int i = numMessages / 2; i < numMessages; i++)
{
TextMessage message = session.createTextMessage("This is text message " + i);
+
+ //We set the duplicate detection header - so the server will ignore the same message
+ //if sent again after failover
+
+ message.setStringProperty(MessageImpl.HDR_DUPLICATE_DETECTION_ID.toString(), "uniqueid" + i);
+
producer.send(message);
+
System.out.println("Sent message: " + message.getText());
}
}
13 src/config/jboss-as/build.bat
@@ -0,0 +1,13 @@
+@echo off
+
+set "OVERRIDE_ANT_HOME=..\..\tools\ant"
+
+if exist "..\..\src\bin\build.bat" (
+ rem running from TRUNK
+ call ..\..\src\bin\build.bat %*
+) else (
+ rem running from the distro
+ call ..\..\bin\build.bat %*
+)
+
+set "OVERRIDE_ANT_HOME="
15 src/config/jboss-as/build.sh
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+OVERRIDE_ANT_HOME=../../tools/ant
+export OVERRIDE_ANT_HOME
+
+if [ -f "../../src/bin/build.sh" ]; then
+ # running from TRUNK
+ ../../src/bin/build.sh "$@"
+else
+ # running from the distro
+ ../../bin/build.sh "$@"
+fi
+
+
+
271 src/main/org/hornetq/core/client/impl/ClientSessionImpl.java
@@ -178,6 +178,8 @@
private volatile boolean workDone;
private final String groupID;
+
+ private volatile boolean inClose;
// Constructors ----------------------------------------------------------------------------
@@ -454,6 +456,14 @@ public XAResource getXAResource()
{
return this;
}
+
+ private void rollbackOnFailover() throws HornetQException
+ {
+ rollback(false);
+
+ throw new HornetQException(TRANSACTION_ROLLED_BACK,
+ "The transaction was rolled back on failover to a backup server");
+ }
public void commit() throws HornetQException
{
@@ -461,15 +471,29 @@ public void commit() throws HornetQException
if (rollbackOnly)
{
- rollback(false);
-
- throw new HornetQException(TRANSACTION_ROLLED_BACK,
- "The transaction was rolled back on failover to a backup server");
+ rollbackOnFailover();
}
flushAcks();
- channel.sendBlocking(new PacketImpl(PacketImpl.SESS_COMMIT));
+ try
+ {
+ channel.sendBlocking(new PacketImpl(PacketImpl.SESS_COMMIT));
+ }
+ catch (HornetQException e)
+ {
+ if (e.getCode() == HornetQException.UNBLOCKED)
+ {
+ //The call to commit was unblocked on failover; we therefore roll back the tx,
+ //and throw a transaction rolled back exception instead
+
+ rollbackOnFailover();
+ }
+ else
+ {
+ throw e;
+ }
+ }
workDone = false;
}
@@ -732,8 +756,7 @@ public void handleReceiveContinuation(final long consumerID, final SessionReceiv
public void close() throws HornetQException
{
if (closed)
- {
- log.info("Already closed so not closing");
+ {
return;
}
@@ -743,11 +766,16 @@ public void close() throws HornetQException
closeChildren();
+ inClose = true;
+
channel.sendBlocking(new SessionCloseMessage());
}
- catch (Throwable ignore)
+ catch (Throwable e)
{
// Session close should always return without exception
+
+ //Note - we only log at trace
+ log.trace("Failed to close session", e);
}
doCleanup();
@@ -812,114 +840,120 @@ public synchronized void handleFailover(final RemotingConnection backupConnectio
else
{
// The session wasn't found on the server - probably we're failing over onto a backup server where the
- // session
- // won't exist or the target server has been restarted - in this case the session will need to be recreated,
+ // session won't exist or the target server has been restarted - in this case the session will need to be recreated,
// and we'll need to recreate any consumers
-
- Packet createRequest = new CreateSessionMessage(name,
- channel.getID(),
- version,
- username,
- password,
- minLargeMessageSize,
- xa,
- autoCommitSends,
- autoCommitAcks,
- preAcknowledge,
- confirmationWindowSize);
- boolean retry = false;
- do
+
+ // It could also be that the server hasn't been restarted, but the session is currently executing close, and that
+ // has already been executed on the server, that's why we can't find the session - in this case we *don't* want
+ // to recreate the session, we just want to unblock the blocking call
+ if (!inClose)
{
- try
- {
- channel1.sendBlocking(createRequest);
- retry = false;
- }
- catch (HornetQException e)
+ Packet createRequest = new CreateSessionMessage(name,
+ channel.getID(),
+ version,
+ username,
+ password,
+ minLargeMessageSize,
+ xa,
+ autoCommitSends,
+ autoCommitAcks,
+ preAcknowledge,
+ confirmationWindowSize);
+ boolean retry = false;
+ do
{
- // the session was created while its server was starting, retry it:
- if (e.getCode() == HornetQException.SESSION_CREATION_REJECTED)
+ try
{
- log.warn("Server is starting, retry to create the session " + name);
- retry = true;
- // sleep a little bit to avoid spinning too much
- Thread.sleep(10);
+ channel1.sendBlocking(createRequest);
+ retry = false;
}
- else
+ catch (HornetQException e)
{
- throw e;
+ // the session was created while its server was starting, retry it:
+ if (e.getCode() == HornetQException.SESSION_CREATION_REJECTED)
+ {
+ log.warn("Server is starting, retry to create the session " + name);
+ retry = true;
+ // sleep a little bit to avoid spinning too much
+ Thread.sleep(10);
+ }
+ else
+ {
+ throw e;
+ }
}
}
- }
- while (retry);
-
- channel.clearCommands();
-
- for (Map.Entry<Long, ClientConsumerInternal> entry : consumers.entrySet())
- {
- SessionCreateConsumerMessage createConsumerRequest = new SessionCreateConsumerMessage(entry.getKey(),
- entry.getValue()
- .getQueueName(),
- entry.getValue()
- .getFilterString(),
- entry.getValue()
- .isBrowseOnly(),
- false);
-
- createConsumerRequest.setChannelID(channel.getID());
-
- Connection conn = channel.getConnection().getTransportConnection();
-
- HornetQBuffer buffer = createConsumerRequest.encode(channel.getConnection());
-
- conn.write(buffer, false);
-
- int clientWindowSize = entry.getValue().getClientWindowSize();
-
- if (clientWindowSize != 0)
+ while (retry);
+
+ channel.clearCommands();
+
+ for (Map.Entry<Long, ClientConsumerInternal> entry : consumers.entrySet())
{
- SessionConsumerFlowCreditMessage packet = new SessionConsumerFlowCreditMessage(entry.getKey(),
- clientWindowSize);
-
- packet.setChannelID(channel.getID());
-
- buffer = packet.encode(channel.getConnection());
-
+ SessionCreateConsumerMessage createConsumerRequest = new SessionCreateConsumerMessage(entry.getKey(),
+ entry.getValue()
+ .getQueueName(),
+ entry.getValue()
+ .getFilterString(),
+ entry.getValue()
+ .isBrowseOnly(),
+ false);
+
+ createConsumerRequest.setChannelID(channel.getID());
+
+ Connection conn = channel.getConnection().getTransportConnection();
+
+ HornetQBuffer buffer = createConsumerRequest.encode(channel.getConnection());
+
conn.write(buffer, false);
+
+ int clientWindowSize = entry.getValue().getClientWindowSize();
+
+ if (clientWindowSize != 0)
+ {
+ SessionConsumerFlowCreditMessage packet = new SessionConsumerFlowCreditMessage(entry.getKey(),
+ clientWindowSize);
+
+ packet.setChannelID(channel.getID());
+
+ buffer = packet.encode(channel.getConnection());
+
+ conn.write(buffer, false);
+ }
}
- }
-
- if ((!autoCommitAcks || !autoCommitSends) && workDone)
- {
- // Session is transacted - set for rollback only
- // FIXME - there is a race condition here - a commit could sneak in before this is set
- rollbackOnly = true;
- }
-
- // Now start the session if it was already started
- if (started)
- {
- for (ClientConsumerInternal consumer : consumers.values())
+
+ if ((!autoCommitAcks || !autoCommitSends) && workDone)
{
- consumer.clearAtFailover();
- consumer.start();
+ // Session is transacted - set for rollback only
+ // FIXME - there is a race condition here - a commit could sneak in before this is set
+ rollbackOnly = true;
}
-
- Packet packet = new PacketImpl(PacketImpl.SESS_START);
-
- packet.setChannelID(channel.getID());
-
- Connection conn = channel.getConnection().getTransportConnection();
-
- HornetQBuffer buffer = packet.encode(channel.getConnection());
-
- conn.write(buffer, false);
+
+ // Now start the session if it was already started
+ if (started)
+ {
+ for (ClientConsumerInternal consumer : consumers.values())
+ {
+ consumer.clearAtFailover();
+ consumer.start();
+ }
+
+ Packet packet = new PacketImpl(PacketImpl.SESS_START);
+
+ packet.setChannelID(channel.getID());
+
+ Connection conn = channel.getConnection().getTransportConnection();
+
+ HornetQBuffer buffer = packet.encode(channel.getConnection());
+
+ conn.write(buffer, false);
+ }
+
+ resetCreditManager = true;
}
-
- resetCreditManager = true;
-
+
channel.returnBlocking();
}
+
}
catch (Throwable t)
{
@@ -1012,6 +1046,23 @@ public void commit(final Xid xid, final boolean onePhase) throws XAException
catch (HornetQException e)
{
log.warn(e.getMessage(), e);
+
+ if (e.getCode() == HornetQException.UNBLOCKED)
+ {
+ //Unblocked on failover
+
+ try
+ {
+ rollback(false);
+ }
+ catch (HornetQException e2)
+ {
+ throw new XAException(XAException.XAER_RMERR);
+ }
+
+ throw new XAException(XAException.XA_RBOTHER);
+ }
+
// This should never occur
throw new XAException(XAException.XAER_RMERR);
}
@@ -1122,7 +1173,7 @@ public boolean isSameRM(final XAResource xares) throws XAException
public int prepare(final Xid xid) throws XAException
{
checkXA();
-
+
if (rollbackOnly)
{
throw new XAException(XAException.XA_RBOTHER);
@@ -1148,6 +1199,24 @@ public int prepare(final Xid xid) throws XAException
}
catch (HornetQException e)
{
+ log.warn(e.getMessage(), e);
+
+ if (e.getCode() == HornetQException.UNBLOCKED)
+ {
+ //Unblocked on failover
+
+ try
+ {
+ rollback(false);
+ }
+ catch (HornetQException e2)
+ {
+ throw new XAException(XAException.XAER_RMERR);
+ }
+
+ throw new XAException(XAException.XA_RBOTHER);
+ }
+
// This should never occur
throw new XAException(XAException.XAER_RMERR);
}
8 tests/src/org/hornetq/tests/integration/cluster/failover/DelayInterceptor2.java
@@ -31,11 +31,15 @@
{
private static final Logger log = Logger.getLogger(DelayInterceptor2.class);
+ private volatile boolean loseResponse = true;
+
public boolean intercept(Packet packet, RemotingConnection connection) throws HornetQException
{
- if (packet.getType() == PacketImpl.NULL_RESPONSE)
+ if (packet.getType() == PacketImpl.NULL_RESPONSE && loseResponse)
{
- //Lose the response from the commit
+ //Lose the response from the commit - only lose the first one
+
+ loseResponse = false;
return false;
}
33 tests/src/org/hornetq/tests/integration/cluster/failover/FailoverTest.java
@@ -44,7 +44,6 @@
import org.hornetq.core.remoting.impl.invm.TransportConstants;
import org.hornetq.core.transaction.impl.XidImpl;
import org.hornetq.jms.client.HornetQTextMessage;
-import org.hornetq.tests.util.RandomUtil;
import org.hornetq.utils.SimpleString;
/**
@@ -1952,30 +1951,32 @@ public void run()
{
sf.addInterceptor(interceptor);
+ log.info("attempting commit");
session.commit();
}
catch (HornetQException e)
{
- if (e.getCode() == HornetQException.UNBLOCKED)
+ if (e.getCode() == HornetQException.TRANSACTION_ROLLED_BACK)
{
+ log.info("got transaction rolled back");
+
// Ok - now we retry the commit after removing the interceptor
sf.removeInterceptor(interceptor);
try
{
+ log.info("trying to commit again");
session.commit();
- fail("commit succeeded");
+ log.info("committed again ok");
+
+ failed = false;
}
catch (HornetQException e2)
{
- if (e2.getCode() == HornetQException.TRANSACTION_ROLLED_BACK)
- {
- // Ok
-
- failed = false;
- }
+
}
+
}
}
}
@@ -1990,6 +1991,8 @@ public void run()
Thread.sleep(500);
fail(session, latch);
+
+ log.info("connection has failed");
committer.join();
@@ -2104,7 +2107,7 @@ public void run()
}
catch (HornetQException e)
{
- if (e.getCode() == HornetQException.UNBLOCKED)
+ if (e.getCode() == HornetQException.TRANSACTION_ROLLED_BACK)
{
// Ok - now we retry the commit after removing the interceptor
@@ -2113,15 +2116,11 @@ public void run()
try
{
session.commit();
+
+ failed = false;
}
catch (HornetQException e2)
- {
- if (e2.getCode() == HornetQException.TRANSACTION_ROLLED_BACK)
- {
- // Ok
-
- failed = false;
- }
+ {
}
}
}