
JMS to JMS bridge reconnection dispatching not working in simple conditions

Hi, I'm testing ActiveMQ 5.10, trying to migrate from 5.6 in the hope of
fixing a variety of dispatching issues I've been dealing with for a while.
I'm running the following topology to verify that producer/consumer
dispatching on reconnection works as it does in 5.6:

producer ------> brokerA
                    ^
                    |  (duplex network connector)
                    v
consumers <----- brokerB

Broker configuration (it's the same config in both brokerA and brokerB;
they point at each other):

<beans
    xmlns="http://www.springframework.org/schema/beans"
    xmlns:amq="http://activemq.apache.org/schema/core"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
      http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
      http://activemq.apache.org/schema/core
      http://activemq.apache.org/schema/core/activemq-core.xsd">

  <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"/>

  <broker xmlns="http://activemq.apache.org/schema/core"
          brokerName="localhost-b2"
          persistent="false"
          useJmx="true">

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <policyEntry queue="test.set1.>">
            <deadLetterStrategy>
              <individualDeadLetterStrategy queuePrefix="DLQ.set1."
                  useQueueForQueueMessages="true"/>
            </deadLetterStrategy>
          </policyEntry>
          <policyEntry queue="test.set2.>">
            <deadLetterStrategy>
              <individualDeadLetterStrategy queuePrefix="DLQ.set2."
                  useQueueForQueueMessages="true"/>
            </deadLetterStrategy>
          </policyEntry>
        </policyEntries>
      </policyMap>
    </destinationPolicy>

    <managementContext>
      <managementContext createConnector="true" connectorPort="1299"/>
    </managementContext>

    <networkConnectors>
      <networkConnector name="failover-b2"
          uri="static:failover:(tcp://ec2-50-17-148-126.compute-1.amazonaws.com:61618)?randomize=false&amp;maxReconnectAttempts=-1"
          duplex="true"
          networkTTL="4"
          alwaysSyncSend="false"
          conduitSubscriptions="false"/>
    </networkConnectors>

    <persistenceAdapter>
      <memoryPersistenceAdapter/>
    </persistenceAdapter>

    <systemUsage>
      <systemUsage>
        <memoryUsage>
          <memoryUsage limit="4300 mb"/>
        </memoryUsage>
        <storeUsage>
          <storeUsage limit="100 gb" name="store"/>
        </storeUsage>
        <tempUsage>
          <tempUsage limit="10 gb"/>
        </tempUsage>
      </systemUsage>
    </systemUsage>

    <transportConnectors>
      <transportConnector name="openwire-in"
          uri="tcp://0.0.0.0:61626"/>
      <transportConnector name="openwire-out"
          uri="tcp://0.0.0.0:61627"/>
      <transportConnector name="openwire-networkConnector"
          uri="tcp://0.0.0.0:61628"/>
    </transportConnectors>

  </broker>

</beans>

I'm using the swiss army project; I built it locally with no source-code
changes. I run it with these parameters:

producer:
ant producer -Durl=tcp://ec2-54-234-168-125.compute-1.amazonaws.com:61626
-Dtopic=false -Dsub=my.test.dest -Dmax=10000 -DparallelThreads=1
-DsleepTime=10 -DmessageSize=400

consumer:
ant consumer -Durl=tcp://ec2-50-17-148-126.compute-1.amazonaws.com:61617
-Dtopic=false -Dsub=my.test.dest -Dmax=500000 -DparallelThreads=2
-DsleepTime=10
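To see where messages stall, I also query the destination counters over JMX
(a sketch of what I run; port 1299 matches the connectorPort in the config
above, and the queue name is the one passed to the harness):

```shell
# activemq-admin ships in the broker's bin/ directory.
# Query the MBean for the test queue on one broker over its JMX connector.
./bin/activemq-admin query \
  --jmxurl service:jmx:rmi:///jndi/rmi://localhost:1299/jmxrmi \
  -QQueue=my.test.dest
```

Comparing EnqueueCount/DequeueCount on each broker shows whether the bridge
is forwarding messages or the consumers are simply not being dispatched to.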

The brokers are always started first. I see "connector started" messages for
both brokers' transport connectors and for the network connection.

Test Case 1:
If the consumers are started first and then the producers, everything works
fine. I can stop the producer and start it again, and everything is still
fine. But if I stop the consumer threads and bring them back up, each thread
consumes one or two messages and then receives nothing else.
On the brokers I only see the EOFExceptions caused by the starting and
stopping of consumers and producers that I induce manually.

Test Case 2:
I start the producer thread first and then the consumer threads. The
producer enqueues normally, but the consumer threads receive only 0 or 1
messages. If I stop the consumer process and start it again, it never
receives anything.

When I stop my topology, the broker the producers connect to is never able
to shut down gracefully. I see the following message appear in the log
several times:

The connection to 'vm://localhost-b2#0' is taking a long time to shutdown.

I always end up issuing "kill -9".
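Before resorting to kill -9 I capture a thread dump so the hung shutdown can
be diagnosed (standard JDK tooling; <broker-pid> is a placeholder for the
broker's Java process id):

```shell
# jstack ships with the JDK; it prints all thread stacks with lock info.
jstack -l <broker-pid> > broker-shutdown-dump.txt

# Alternatively, SIGQUIT makes the JVM print the dump to its console/log
# without terminating the process.
kill -3 <broker-pid>
```

The dump shows which thread the vm://localhost-b2 transport shutdown is
blocked on; I can attach one if it helps.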

I never see anything else in the logs that would signal bigger issues.

Is this a known issue? Do you see any obvious explanation in the
configuration of the producers/consumers or the brokers (I certainly don't)?

Help would be greatly appreciated.
